2025-08-14T21:21:16.9458824Z Current runner version: '2.328.0'
2025-08-14T21:21:16.9465361Z Runner name: 'i-0019fc24284416ca3'
2025-08-14T21:21:16.9466261Z Runner group name: 'default'
2025-08-14T21:21:16.9467061Z Machine name: 'ip-10-0-56-34'
2025-08-14T21:21:16.9469706Z ##[group]GITHUB_TOKEN Permissions
2025-08-14T21:21:16.9471994Z Contents: read
2025-08-14T21:21:16.9472503Z Metadata: read
2025-08-14T21:21:16.9472922Z ##[endgroup]
2025-08-14T21:21:16.9475187Z Secret source: Actions
2025-08-14T21:21:16.9480226Z Prepare workflow directory
2025-08-14T21:21:17.0068865Z Prepare all required actions
2025-08-14T21:21:17.0108079Z Getting action download info
2025-08-14T21:21:17.3136189Z Download action repository 'pytorch/test-infra@main' (SHA:83f58f391e939c10dcb8cb6d745e4cefa3b98a84)
2025-08-14T21:21:19.3557172Z Download action repository 'pytorch/pytorch@main' (SHA:3be70dc30e893b552fc0f23ca06cd8f7949b6d08)
2025-08-14T21:21:33.1706753Z Download action repository 'actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065' (SHA:a26af69be951a213d495a4c3e4e4022e16d87065)
2025-08-14T21:21:33.5304972Z Download action repository 'aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722' (SHA:ececac1a45f3b08a01d2dd070d28d111c5fe6722)
2025-08-14T21:21:33.7637797Z Download action repository 'aws-actions/amazon-ecr-login@062b18b96a7aff071d4dc91bc00c4c1a7945b076' (SHA:062b18b96a7aff071d4dc91bc00c4c1a7945b076)
2025-08-14T21:21:34.0161199Z Download action repository 'seemethere/upload-artifact-s3@baba72d0712b404f646cebe0730933554ebce96a' (SHA:baba72d0712b404f646cebe0730933554ebce96a)
2025-08-14T21:21:34.3129621Z Getting action download info
2025-08-14T21:21:34.4205460Z Download action repository 'actions/checkout@v4' (SHA:08eba0b27e820071cde6df949e0beb9ba4906955)
2025-08-14T21:21:34.6792683Z Getting action download info
2025-08-14T21:21:34.8005887Z Download action repository 'nick-fields/retry@v3.0.0' (SHA:7152eba30c6575329ac0576536151aca5a72780e)
2025-08-14T21:21:35.0165889Z Getting action download info
2025-08-14T21:21:35.1229946Z Download action repository 'nick-fields/retry@3e91a01664abd3c5cd539100d10d33b9c5b68482' (SHA:3e91a01664abd3c5cd539100d10d33b9c5b68482)
2025-08-14T21:21:35.3189545Z Getting action download info
2025-08-14T21:21:35.4669726Z Uses: pytorch/pytorch/.github/workflows/_linux-test.yml@refs/heads/main (1fc683cf17c8c673044538d10266c00f92987be2)
2025-08-14T21:21:35.4673738Z ##[group] Inputs
2025-08-14T21:21:35.4674064Z build-environment: linux-jammy-py3.9-gcc11-build
2025-08-14T21:21:35.4676886Z test-matrix: {"include": [{"config": "cpu_inductor_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_inductor_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_inductor_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_inductor_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_avx2_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_timm", "shard": 1, "num_shards": 2, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_timm", "shard": 2, "num_shards": 2, "runner": "linux.10xlarge.avx2"}]}
2025-08-14T21:21:35.4680162Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe
2025-08-14T21:21:35.4680826Z sync-tag:
2025-08-14T21:21:35.4681647Z timeout-minutes: 240
2025-08-14T21:21:35.4681872Z use-gha:
2025-08-14T21:21:35.4682285Z dashboard-tag:
2025-08-14T21:21:35.4682611Z s3-bucket: gha-artifacts
2025-08-14T21:21:35.4682843Z aws-role-to-assume:
2025-08-14T21:21:35.4683378Z disable-monitor: false
2025-08-14T21:21:35.4683636Z monitor-log-interval: 5
2025-08-14T21:21:35.4683892Z monitor-data-collect-interval: 1
2025-08-14T21:21:35.4684224Z ##[endgroup]
2025-08-14T21:21:35.4689061Z Complete job name: linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2)
2025-08-14T21:21:35.5394243Z A job started hook has been configured by the self-hosted runner administrator
2025-08-14T21:21:35.5506683Z ##[group]Run '/home/ec2-user/runner-scripts/before_job.sh'
2025-08-14T21:21:35.5519524Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-08-14T21:21:35.5520077Z ##[endgroup]
2025-08-14T21:21:37.1562990Z Runner Type: linux.10xlarge.avx2
2025-08-14T21:21:37.1563504Z Instance Type: m4.10xlarge
2025-08-14T21:21:37.1563726Z AMI Name: unknown
2025-08-14T21:21:37.1602523Z AMI ID: ami-05ffe3c48a9991133
2025-08-14T21:21:42.8557225Z ##[group]Run pytorch/test-infra/.github/actions/setup-ssh@main
2025-08-14T21:21:42.8557628Z with:
2025-08-14T21:21:42.8558211Z github-secret: ***
2025-08-14T21:21:42.8558757Z instructions: All testing is done inside the container, to start an interactive session run: docker exec -it $(docker container ps --format '{{.ID}}') bash
2025-08-14T21:21:42.8559334Z activate-with-label: false
2025-08-14T21:21:42.8559569Z label: with-ssh
2025-08-14T21:21:42.8559777Z remove-existing-keys: true
2025-08-14T21:21:42.8559995Z fail-silently: true
2025-08-14T21:21:42.8560206Z env:
2025-08-14T21:21:42.8560387Z GIT_DEFAULT_BRANCH: main
2025-08-14T21:21:42.8560601Z ##[endgroup]
2025-08-14T21:21:43.0073700Z Please see https://github.com/pytorch/pytorch/wiki/Debugging-using-with-ssh-for-Github-Actions for more info.
2025-08-14T21:21:43.0075013Z Not on pull request and ciflow reference could not be extracted, skipping adding ssh keys
2025-08-14T21:21:43.0371953Z ##[group]Run pytorch/pytorch/.github/actions/checkout-pytorch@main
2025-08-14T21:21:43.0372301Z with:
2025-08-14T21:21:43.0372492Z no-sudo: true
2025-08-14T21:21:43.0372691Z submodules: recursive
2025-08-14T21:21:43.0372908Z fetch-depth: 0
2025-08-14T21:21:43.0373088Z env:
2025-08-14T21:21:43.0373267Z GIT_DEFAULT_BRANCH: main
2025-08-14T21:21:43.0373488Z ##[endgroup]
2025-08-14T21:21:43.0489450Z ##[group]Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT"
2025-08-14T21:21:43.0490265Z echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT"
2025-08-14T21:21:43.0502944Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-08-14T21:21:43.0503258Z env:
2025-08-14T21:21:43.0503475Z GIT_DEFAULT_BRANCH: main
2025-08-14T21:21:43.0503724Z ##[endgroup]
2025-08-14T21:21:43.0600624Z ##[group]Run # Use all available CPUs for fetching
2025-08-14T21:21:43.0601372Z # Use all available CPUs for fetching
2025-08-14T21:21:43.0601888Z cd "${GITHUB_WORKSPACE}"
2025-08-14T21:21:43.0602398Z git config --global fetch.parallel 0
2025-08-14T21:21:43.0602988Z git config --global submodule.fetchJobs 0
2025-08-14T21:21:43.0603515Z 
2025-08-14T21:21:43.0603978Z # Clean workspace. The default checkout action should also do this, but
2025-08-14T21:21:43.0604582Z # do it here as well just in case
2025-08-14T21:21:43.0604958Z if [[ -d .git ]]; then
2025-08-14T21:21:43.0605195Z   if [ -z "${NO_SUDO}" ]; then
2025-08-14T21:21:43.0605442Z     sudo git clean -ffdx
2025-08-14T21:21:43.0605671Z   else
2025-08-14T21:21:43.0605862Z     git clean -ffdx
2025-08-14T21:21:43.0606132Z   fi
2025-08-14T21:21:43.0606309Z fi
2025-08-14T21:21:43.0615381Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-08-14T21:21:43.0615821Z env:
2025-08-14T21:21:43.0616096Z GIT_DEFAULT_BRANCH: main
2025-08-14T21:21:43.0616329Z NO_SUDO: true
2025-08-14T21:21:43.0616511Z ##[endgroup]
2025-08-14T21:21:43.0746322Z ##[group]Run actions/checkout@v4
2025-08-14T21:21:43.0746594Z with:
2025-08-14T21:21:43.0746817Z ref: 1fc683cf17c8c673044538d10266c00f92987be2
2025-08-14T21:21:43.0747136Z fetch-depth: 0
2025-08-14T21:21:43.0747345Z submodules: recursive
2025-08-14T21:21:43.0747583Z show-progress: false
2025-08-14T21:21:43.0747820Z repository: pytorch/pytorch
2025-08-14T21:21:43.0748213Z token: ***
2025-08-14T21:21:43.0748405Z ssh-strict: true
2025-08-14T21:21:43.0748612Z ssh-user: git
2025-08-14T21:21:43.0749221Z persist-credentials: true
2025-08-14T21:21:43.0749438Z clean: true
2025-08-14T21:21:43.0749666Z sparse-checkout-cone-mode: true
2025-08-14T21:21:43.0749913Z fetch-tags: false
2025-08-14T21:21:43.0750096Z lfs: false
2025-08-14T21:21:43.0750286Z set-safe-directory: true
2025-08-14T21:21:43.0750516Z env:
2025-08-14T21:21:43.0750685Z GIT_DEFAULT_BRANCH: main
2025-08-14T21:21:43.0750897Z ##[endgroup]
2025-08-14T21:21:43.1996827Z Syncing repository: pytorch/pytorch
2025-08-14T21:21:43.2002371Z ##[group]Getting Git version info
2025-08-14T21:21:43.2002801Z Working directory is '/home/ec2-user/actions-runner/_work/pytorch/pytorch'
2025-08-14T21:21:43.2003357Z [command]/usr/bin/git version
2025-08-14T21:21:43.2162383Z git version 2.47.1
2025-08-14T21:21:43.2191982Z ##[endgroup]
2025-08-14T21:21:43.2200197Z Copying '/home/ec2-user/.gitconfig' to '/home/ec2-user/actions-runner/_work/_temp/42d5ce44-f8bd-4c7f-a1eb-0920256d71ce/.gitconfig'
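Note: a minimal sketch, assuming local git and network access to github.com, of the checkout that these steps perform. The pinned SHA is the ref shown in the actions/checkout inputs above; the real job fetches all branches and tags (fetch-depth: 0), but fetching main is enough here because the pinned commit comes from refs/heads/main:

    git init pytorch && cd pytorch
    git config --local gc.auto 0
    git remote add origin https://github.com/pytorch/pytorch
    git fetch --prune origin main
    git checkout --detach 1fc683cf17c8c673044538d10266c00f92987be2
    git submodule update --init --recursive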
2025-08-14T21:21:43.2224653Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/42d5ce44-f8bd-4c7f-a1eb-0920256d71ce' before making global git config changes
2025-08-14T21:21:43.2225664Z Adding repository directory to the temporary git global config as a safe directory
2025-08-14T21:21:43.2228156Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/pytorch/pytorch
2025-08-14T21:21:43.2278173Z Deleting the contents of '/home/ec2-user/actions-runner/_work/pytorch/pytorch'
2025-08-14T21:21:43.2280918Z ##[group]Initializing the repository
2025-08-14T21:21:43.2284739Z [command]/usr/bin/git init /home/ec2-user/actions-runner/_work/pytorch/pytorch
2025-08-14T21:21:43.2340254Z hint: Using 'master' as the name for the initial branch. This default branch name
2025-08-14T21:21:43.2340858Z hint: is subject to change. To configure the initial branch name to use in all
2025-08-14T21:21:43.2341409Z hint: of your new repositories, which will suppress this warning, call:
2025-08-14T21:21:43.2341811Z hint:
2025-08-14T21:21:43.2342062Z hint:   git config --global init.defaultBranch <name>
2025-08-14T21:21:43.2342335Z hint:
2025-08-14T21:21:43.2342609Z hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
2025-08-14T21:21:43.2343046Z hint: 'development'. The just-created branch can be renamed via this command:
2025-08-14T21:21:43.2343388Z hint:
2025-08-14T21:21:43.2343578Z hint:   git branch -m <name>
2025-08-14T21:21:43.2355530Z Initialized empty Git repository in /home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/
2025-08-14T21:21:43.2356550Z [command]/usr/bin/git remote add origin https://github.com/pytorch/pytorch
2025-08-14T21:21:43.2404213Z ##[endgroup]
2025-08-14T21:21:43.2404668Z ##[group]Disabling automatic garbage collection
2025-08-14T21:21:43.2408327Z [command]/usr/bin/git config --local gc.auto 0
2025-08-14T21:21:43.2437662Z ##[endgroup]
2025-08-14T21:21:43.2443311Z ##[group]Setting up auth
2025-08-14T21:21:43.2443656Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
2025-08-14T21:21:43.2471358Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
2025-08-14T21:21:43.2832216Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
2025-08-14T21:21:43.2861050Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"
2025-08-14T21:21:43.3184271Z [command]/usr/bin/git config --local http.https://github.com/.extraheader AUTHORIZATION: basic ***
2025-08-14T21:21:43.3235489Z ##[endgroup]
2025-08-14T21:21:43.3235890Z ##[group]Fetching the repository
2025-08-14T21:21:43.3242672Z [command]/usr/bin/git -c protocol.version=2 fetch --prune --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/*
2025-08-14T21:22:29.9273415Z From https://github.com/pytorch/pytorch
2025-08-14T21:22:29.9273957Z * [new branch] 2.6.0.dev20241004+ -> origin/2.6.0.dev20241004+
2025-08-14T21:22:29.9274507Z * [new branch] 5addvllmbuild -> origin/5addvllmbuild
2025-08-14T21:22:29.9275098Z * [new branch] AaronWang04_addmmfusion_perftest ->
origin/AaronWang04_addmmfusion_perftest 2025-08-14T21:22:29.9275774Z * [new branch] HDCharles-2.6.0-release-notes -> origin/HDCharles-2.6.0-release-notes 2025-08-14T21:22:29.9276338Z * [new branch] JackCaoG/dynamo_make_fx_non_core_aten_ops -> origin/JackCaoG/dynamo_make_fx_non_core_aten_ops 2025-08-14T21:22:29.9277373Z * [new branch] PR-AOTInductorNoneBug -> origin/PR-AOTInductorNoneBug 2025-08-14T21:22:29.9278912Z * [new branch] PR-AOTInductorNoneBugFix -> origin/PR-AOTInductorNoneBugFix 2025-08-14T21:22:29.9279735Z * [new branch] PR-FixConfigsIssue -> origin/PR-FixConfigsIssue 2025-08-14T21:22:29.9280654Z * [new branch] PR-NoneBugFix-viable -> origin/PR-NoneBugFix-viable 2025-08-14T21:22:29.9281765Z * [new branch] PR-ResetToZero -> origin/PR-ResetToZero 2025-08-14T21:22:29.9283153Z * [new branch] Update-Flash-Packaging -> origin/Update-Flash-Packaging 2025-08-14T21:22:29.9283894Z * [new branch] add-missing-args-normalization -> origin/add-missing-args-normalization 2025-08-14T21:22:29.9284889Z * [new branch] add-user-guide-structure -> origin/add-user-guide-structure 2025-08-14T21:22:29.9285862Z * [new branch] addVllmPin -> origin/addVllmPin 2025-08-14T21:22:29.9286870Z * [new branch] add_windows_testing_back -> origin/add_windows_testing_back 2025-08-14T21:22:29.9288095Z * [new branch] addbuildvllm -> origin/addbuildvllm 2025-08-14T21:22:29.9293889Z * [new branch] addmm-heuristic -> origin/addmm-heuristic 2025-08-14T21:22:29.9298926Z * [new branch] addsimde -> origin/addsimde 2025-08-14T21:22:29.9299461Z * [new branch] addvllpinnedfile -> origin/addvllpinnedfile 2025-08-14T21:22:29.9299899Z * [new branch] adi/acl_upgrade -> origin/adi/acl_upgrade 2025-08-14T21:22:29.9300316Z * [new branch] adi/skip_slow_tests -> origin/adi/skip_slow_tests 2025-08-14T21:22:29.9300709Z * [new branch] adi/test -> origin/adi/test 2025-08-14T21:22:29.9301075Z * [new branch] adi/test_bgemm -> origin/adi/test_bgemm 2025-08-14T21:22:29.9301458Z * [new branch] adi/test_fusions -> origin/adi/test_fusions 2025-08-14T21:22:29.9301943Z * [new branch] adi/test_onednn_v3.9 -> origin/adi/test_onednn_v3.9 2025-08-14T21:22:29.9302869Z * [new branch] adi/test_presve_change -> origin/adi/test_presve_change 2025-08-14T21:22:29.9309563Z * [new branch] adi/test_timm -> origin/adi/test_timm 2025-08-14T21:22:29.9310047Z * [new branch] adi/testpresve_change -> origin/adi/testpresve_change 2025-08-14T21:22:29.9310555Z * [new branch] aditew01/test/vec_bf16 -> origin/aditew01/test/vec_bf16 2025-08-14T21:22:29.9311160Z * [new branch] ah-globalfeedback-hook -> origin/ah-globalfeedback-hook 2025-08-14T21:22:29.9311664Z * [new branch] albanD-patch-1 -> origin/albanD-patch-1 2025-08-14T21:22:29.9312054Z * [new branch] alt-disable -> origin/alt-disable 2025-08-14T21:22:29.9312597Z * [new branch] angelayi/aoti_additional_files -> origin/angelayi/aoti_additional_files 2025-08-14T21:22:29.9313078Z * [new branch] angelayi/aoti_inductor_fx -> origin/angelayi/aoti_inductor_fx 2025-08-14T21:22:29.9313682Z * [new branch] angelayi/assert_tensor_metadata_device -> origin/angelayi/assert_tensor_metadata_device 2025-08-14T21:22:29.9314393Z * [new branch] angelayi/benchmark -> origin/angelayi/benchmark 2025-08-14T21:22:29.9315354Z * [new branch] angelayi/benchmark2 -> origin/angelayi/benchmark2 2025-08-14T21:22:29.9316623Z * [new branch] angelayi/change_pytree_serialization -> origin/angelayi/change_pytree_serialization 2025-08-14T21:22:29.9317714Z * [new branch] angelayi/cpp_loader -> origin/angelayi/cpp_loader 2025-08-14T21:22:29.9323034Z * [new 
branch] angelayi/custom_op_subgraph -> origin/angelayi/custom_op_subgraph 2025-08-14T21:22:29.9324485Z * [new branch] angelayi/customop -> origin/angelayi/customop 2025-08-14T21:22:29.9325454Z * [new branch] angelayi/del_lib -> origin/angelayi/del_lib 2025-08-14T21:22:29.9326351Z * [new branch] angelayi/docs -> origin/angelayi/docs 2025-08-14T21:22:29.9327319Z * [new branch] angelayi/docs2 -> origin/angelayi/docs2 2025-08-14T21:22:29.9328298Z * [new branch] angelayi/fix_pt2 -> origin/angelayi/fix_pt2 2025-08-14T21:22:29.9329510Z * [new branch] angelayi/logging.bak -> origin/angelayi/logging.bak 2025-08-14T21:22:29.9330259Z * [new branch] angelayi/logging2 -> origin/angelayi/logging2 2025-08-14T21:22:29.9331276Z * [new branch] angelayi/no_so_weight -> origin/angelayi/no_so_weight 2025-08-14T21:22:29.9332181Z * [new branch] angelayi/pytree -> origin/angelayi/pytree 2025-08-14T21:22:29.9333313Z * [new branch] angelayi/save_error -> origin/angelayi/save_error 2025-08-14T21:22:29.9337824Z * [new branch] angelayi/scan_layers -> origin/angelayi/scan_layers 2025-08-14T21:22:29.9338279Z * [new branch] angelayi/symint_input -> origin/angelayi/symint_input 2025-08-14T21:22:29.9338744Z * [new branch] angelayi/tensor_nn_module_meta -> origin/angelayi/tensor_nn_module_meta 2025-08-14T21:22:29.9339255Z * [new branch] angelayi/torch_size -> origin/angelayi/torch_size 2025-08-14T21:22:29.9339668Z * [new branch] aoti-cuda-alloc -> origin/aoti-cuda-alloc 2025-08-14T21:22:29.9340054Z * [new branch] aoti_weight_sharing -> origin/aoti_weight_sharing 2025-08-14T21:22:29.9340693Z * [new branch] arsh/symint_mm_ind_decomp -> origin/arsh/symint_mm_ind_decomp 2025-08-14T21:22:29.9341833Z * [new branch] atalman-inductor-perf-cu124 -> origin/atalman-inductor-perf-cu124 2025-08-14T21:22:29.9342806Z * [new branch] atalman-inductor-perf-cu124.1 -> origin/atalman-inductor-perf-cu124.1 2025-08-14T21:22:29.9343769Z * [new branch] atalman-patch-1 -> origin/atalman-patch-1 2025-08-14T21:22:29.9344826Z * [new branch] atalman-patch-2 -> origin/atalman-patch-2 2025-08-14T21:22:29.9345896Z * [new branch] atalman-patch-3 -> origin/atalman-patch-3 2025-08-14T21:22:29.9346806Z * [new branch] atalman-patch-6 -> origin/atalman-patch-6 2025-08-14T21:22:29.9356478Z * [new branch] atalman-patch-7 -> origin/atalman-patch-7 2025-08-14T21:22:29.9357077Z * [new branch] atalman-patch-8 -> origin/atalman-patch-8 2025-08-14T21:22:29.9357675Z * [new branch] atalman_inductor_2.3.0 -> origin/atalman_inductor_2.3.0 2025-08-14T21:22:29.9358237Z * [new branch] atalman_inductor_2.3.1 -> origin/atalman_inductor_2.3.1 2025-08-14T21:22:29.9358842Z * [new branch] atalman_inductor_2.4.0 -> origin/atalman_inductor_2.4.0 2025-08-14T21:22:29.9359390Z * [new branch] atalman_inductor_2.4.x -> origin/atalman_inductor_2.4.x 2025-08-14T21:22:29.9360070Z * [new branch] autoupdate-transformers-pin-via-pr -> origin/autoupdate-transformers-pin-via-pr 2025-08-14T21:22:29.9360565Z * [new branch] backupvllm -> origin/backupvllm 2025-08-14T21:22:29.9361300Z * [new branch] base/1.5 -> origin/base/1.5 2025-08-14T21:22:29.9366838Z * [new branch] batching_sdpa_efficient_attention -> origin/batching_sdpa_efficient_attention 2025-08-14T21:22:29.9367362Z * [new branch] benchmark-updates -> origin/benchmark-updates 2025-08-14T21:22:29.9367870Z * [new branch] benchmarking-script -> origin/benchmarking-script 2025-08-14T21:22:29.9368450Z * [new branch] benjaminglass1/mark-large-tensor-tests-serial -> origin/benjaminglass1/mark-large-tensor-tests-serial 2025-08-14T21:22:29.9369072Z * 
[new branch] bertmaher/pinbump26 -> origin/bertmaher/pinbump26 2025-08-14T21:22:29.9369488Z * [new branch] bertrand/cutlass -> origin/bertrand/cutlass 2025-08-14T21:22:29.9370101Z * [new branch] bf/cg-log -> origin/bf/cg-log 2025-08-14T21:22:29.9371047Z * [new branch] bf/cg-remove-check -> origin/bf/cg-remove-check 2025-08-14T21:22:29.9372065Z * [new branch] bf/cg-skip-1-kernel -> origin/bf/cg-skip-1-kernel 2025-08-14T21:22:29.9372821Z * [new branch] bf/cudagraph -> origin/bf/cudagraph 2025-08-14T21:22:29.9373779Z * [new branch] bf/cudagraph-disable-input-mutation -> origin/bf/cudagraph-disable-input-mutation 2025-08-14T21:22:29.9375182Z * [new branch] bf/cudagraph-enable-input-mutation-support-benchmark -> origin/bf/cudagraph-enable-input-mutation-support-benchmark 2025-08-14T21:22:29.9376065Z * [new branch] bf/cudagraph-partition -> origin/bf/cudagraph-partition 2025-08-14T21:22:29.9377163Z * [new branch] bf/default-recompile-reason -> origin/bf/default-recompile-reason 2025-08-14T21:22:29.9385791Z * [new branch] bf/donated-buffer-bench -> origin/bf/donated-buffer-bench 2025-08-14T21:22:29.9386341Z * [new branch] bf/improve-kernel-bench -> origin/bf/improve-kernel-bench 2025-08-14T21:22:29.9386809Z * [new branch] bf/kernel-benchmark -> origin/bf/kernel-benchmark 2025-08-14T21:22:29.9387239Z * [new branch] bf/partition-doc -> origin/bf/partition-doc 2025-08-14T21:22:29.9387664Z * [new branch] bf/partition-move-cpu -> origin/bf/partition-move-cpu 2025-08-14T21:22:29.9388105Z * [new branch] bf/partition-turn-on -> origin/bf/partition-turn-on 2025-08-14T21:22:29.9388645Z * [new branch] bf/remove-check-55b0c39d -> origin/bf/remove-check-55b0c39d 2025-08-14T21:22:29.9389500Z * [new branch] bf/skip-asserts -> origin/bf/skip-asserts 2025-08-14T21:22:29.9390532Z * [new branch] bf16adamw -> origin/bf16adamw 2025-08-14T21:22:29.9397618Z * [new branch] bisect_perf_hf_T5_3acc6eac492 -> origin/bisect_perf_hf_T5_3acc6eac492 2025-08-14T21:22:29.9398262Z * [new branch] bisect_perf_hf_T5_3fcf66f61fb -> origin/bisect_perf_hf_T5_3fcf66f61fb 2025-08-14T21:22:29.9399344Z * [new branch] bisect_perf_hf_T5_4009d154129 -> origin/bisect_perf_hf_T5_4009d154129 2025-08-14T21:22:29.9399939Z * [new branch] bisect_perf_hf_T5_40d0740e73d -> origin/bisect_perf_hf_T5_40d0740e73d 2025-08-14T21:22:29.9400524Z * [new branch] bisect_perf_hf_T5_5268754e -> origin/bisect_perf_hf_T5_5268754e 2025-08-14T21:22:29.9401175Z * [new branch] bisect_perf_hf_T5_7d89a8d385c -> origin/bisect_perf_hf_T5_7d89a8d385c 2025-08-14T21:22:29.9401758Z * [new branch] bisect_perf_hf_T5_b7a25c1ee7c -> origin/bisect_perf_hf_T5_b7a25c1ee7c 2025-08-14T21:22:29.9402204Z * [new branch] bisect_perf_hf_T5_c25b201583f -> origin/bisect_perf_hf_T5_c25b201583f 2025-08-14T21:22:29.9402652Z * [new branch] bisect_perf_hf_T5_c93e57efac0 -> origin/bisect_perf_hf_T5_c93e57efac0 2025-08-14T21:22:29.9403105Z * [new branch] bisect_perf_hf_T5_ca9813ea149 -> origin/bisect_perf_hf_T5_ca9813ea149 2025-08-14T21:22:29.9403539Z * [new branch] bisect_perf_hf_T5_d65f194a -> origin/bisect_perf_hf_T5_d65f194a 2025-08-14T21:22:29.9403976Z * [new branch] bisect_perf_hf_T5_da94ab0b -> origin/bisect_perf_hf_T5_da94ab0b 2025-08-14T21:22:29.9404435Z * [new branch] bisect_perf_hf_T5_da94ab0b_new -> origin/bisect_perf_hf_T5_da94ab0b_new 2025-08-14T21:22:29.9404894Z * [new branch] bisect_perf_hf_T5_db4e8a1d8a8 -> origin/bisect_perf_hf_T5_db4e8a1d8a8 2025-08-14T21:22:29.9405330Z * [new branch] bisect_perf_hf_T5_e0d97e936a2 -> origin/bisect_perf_hf_T5_e0d97e936a2 2025-08-14T21:22:29.9405835Z 
* [new branch] bisect_perf_hf_T5_f23621ec563 -> origin/bisect_perf_hf_T5_f23621ec563 2025-08-14T21:22:29.9415686Z * [new branch] bowbao/bench_updates_stage -> origin/bowbao/bench_updates_stage 2025-08-14T21:22:29.9416604Z * [new branch] bowbao/dort_rewriter -> origin/bowbao/dort_rewriter 2025-08-14T21:22:29.9417656Z * [new branch] bowbao/wip_prs -> origin/bowbao/wip_prs 2025-08-14T21:22:29.9419023Z * [new branch] bowenbao/partial_min_max_reduce -> origin/bowenbao/partial_min_max_reduce 2025-08-14T21:22:29.9420363Z * [new branch] brister/always_wrapper_ir -> origin/brister/always_wrapper_ir 2025-08-14T21:22:29.9427193Z * [new branch] brister/flatten_contig -> origin/brister/flatten_contig 2025-08-14T21:22:29.9427846Z * [new branch] brister/test_block_ptr_same -> origin/brister/test_block_ptr_same 2025-08-14T21:22:29.9428577Z * [new branch] brister/tiled_reduction_no_numel_check -> origin/brister/tiled_reduction_no_numel_check 2025-08-14T21:22:29.9429189Z * [new branch] c57382a49 -> origin/c57382a49 2025-08-14T21:22:29.9429725Z * [new branch] ca_0431d47eaa -> origin/ca_0431d47eaa 2025-08-14T21:22:29.9430211Z * [new branch] ca_fix_0431d47eaa -> origin/ca_fix_0431d47eaa 2025-08-14T21:22:29.9431279Z * [new branch] camyll/revert-94bc900da97ad7f3c35b3b819bb53b23c74b581a-for-release-2.8 -> origin/camyll/revert-94bc900da97ad7f3c35b3b819bb53b23c74b581a-for-release-2.8 2025-08-14T21:22:29.9432172Z * [new branch] camyll/test_precommit_hooks_lintrunner -> origin/camyll/test_precommit_hooks_lintrunner 2025-08-14T21:22:29.9432873Z * [new branch] camyllh/cherrypick-151547-for-release28 -> origin/camyllh/cherrypick-151547-for-release28 2025-08-14T21:22:29.9433428Z * [new branch] camyllh/test_setup_hooks_push -> origin/camyllh/test_setup_hooks_push 2025-08-14T21:22:29.9434043Z * [new branch] cherry-pick-149654-by-pytorch_bot_bot_ -> origin/cherry-pick-149654-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9435152Z * [new branch] cherry-pick-151939-by-pytorch_bot_bot_ -> origin/cherry-pick-151939-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9436155Z * [new branch] cherry-pick-154174-by-pytorch_bot_bot_ -> origin/cherry-pick-154174-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9437355Z * [new branch] cherry-pick-155896-by-pytorch_bot_bot_ -> origin/cherry-pick-155896-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9438314Z * [new branch] cherry-pick-156260-by-pytorch_bot_bot_ -> origin/cherry-pick-156260-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9439300Z * [new branch] cherry-pick-156719-by-pytorch_bot_bot_ -> origin/cherry-pick-156719-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9440220Z * [new branch] cherry-pick-156876-by-pytorch_bot_bot_ -> origin/cherry-pick-156876-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9441416Z * [new branch] cherry-pick-156888-by-pytorch_bot_bot_ -> origin/cherry-pick-156888-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9443820Z * [new branch] cherry-pick-157014-by-pytorch_bot_bot_ -> origin/cherry-pick-157014-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9444573Z * [new branch] cherry-pick-157179-by-pytorch_bot_bot_ -> origin/cherry-pick-157179-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9445150Z * [new branch] cherry-pick-157453-by-pytorch_bot_bot_ -> origin/cherry-pick-157453-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9445725Z * [new branch] cherry-pick-157513-by-pytorch_bot_bot_ -> origin/cherry-pick-157513-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9446415Z * [new branch] cherry-pick-157558-by-pytorch_bot_bot_ -> origin/cherry-pick-157558-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9447349Z * [new branch] 
cherry-pick-157598-by-pytorch_bot_bot_ -> origin/cherry-pick-157598-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9448407Z * [new branch] cherry-pick-157600-by-pytorch_bot_bot_ -> origin/cherry-pick-157600-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9458209Z * [new branch] cherry-pick-157630-by-pytorch_bot_bot_ -> origin/cherry-pick-157630-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9459144Z * [new branch] cherry-pick-157695-by-pytorch_bot_bot_ -> origin/cherry-pick-157695-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9459753Z * [new branch] cherry-pick-157732-by-pytorch_bot_bot_ -> origin/cherry-pick-157732-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9460403Z * [new branch] cherry-pick-157733-by-pytorch_bot_bot_ -> origin/cherry-pick-157733-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9461018Z * [new branch] cherry-pick-157985-by-pytorch_bot_bot_ -> origin/cherry-pick-157985-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9461587Z * [new branch] cherry-pick-157993-by-pytorch_bot_bot_ -> origin/cherry-pick-157993-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9462204Z * [new branch] cherry-pick-158064-by-pytorch_bot_bot_ -> origin/cherry-pick-158064-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9462756Z * [new branch] cherry-pick-158152-by-pytorch_bot_bot_ -> origin/cherry-pick-158152-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9463399Z * [new branch] cherry-pick-158295-by-pytorch_bot_bot_ -> origin/cherry-pick-158295-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9464118Z * [new branch] cherry-pick-158301-by-pytorch_bot_bot_ -> origin/cherry-pick-158301-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9466358Z * [new branch] cherry-pick-158537-by-pytorch_bot_bot_ -> origin/cherry-pick-158537-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9466918Z * [new branch] cherry-pick-158572-by-pytorch_bot_bot_ -> origin/cherry-pick-158572-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9467467Z * [new branch] cherry-pick-158595 -> origin/cherry-pick-158595 2025-08-14T21:22:29.9467956Z * [new branch] cherry-pick-159181-by-pytorch_bot_bot_ -> origin/cherry-pick-159181-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9468600Z * [new branch] cherry-pick-159969-by-pytorch_bot_bot_ -> origin/cherry-pick-159969-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9469424Z * [new branch] cherry-pick-160586-by-pytorch_bot_bot_ -> origin/cherry-pick-160586-by-pytorch_bot_bot_ 2025-08-14T21:22:29.9470391Z * [new branch] cherry-pick-PR-158746 -> origin/cherry-pick-PR-158746 2025-08-14T21:22:29.9471553Z * [new branch] cherrypick-e4e2701429c17078c3c475382a8b1fa4c8a8cefc -> origin/cherrypick-e4e2701429c17078c3c475382a8b1fa4c8a8cefc 2025-08-14T21:22:29.9472697Z * [new branch] chilli/flex_vllm -> origin/chilli/flex_vllm 2025-08-14T21:22:29.9473742Z * [new branch] ckluk2-compileThread-1 -> origin/ckluk2-compileThread-1 2025-08-14T21:22:29.9474589Z * [new branch] ckluk2-compileThread-2 -> origin/ckluk2-compileThread-2 2025-08-14T21:22:29.9475500Z * [new branch] ckluk2-compileThread-64 -> origin/ckluk2-compileThread-64 2025-08-14T21:22:29.9476907Z * [new branch] ckluk2-test-1 -> origin/ckluk2-test-1 2025-08-14T21:22:29.9477859Z * [new branch] cleantest1 -> origin/cleantest1 2025-08-14T21:22:29.9483495Z * [new branch] codex-testing -> origin/codex-testing 2025-08-14T21:22:29.9487567Z * [new branch] codex/create-test-for-tensor-memory-leak-in-cudagraph -> origin/codex/create-test-for-tensor-memory-leak-in-cudagraph 2025-08-14T21:22:29.9488327Z * [new branch] codex/fix-issue-121219-in-pytorch -> origin/codex/fix-issue-121219-in-pytorch 2025-08-14T21:22:29.9488852Z * [new branch] codex/fix-issue-160415-in-pytorch -> 
origin/codex/fix-issue-160415-in-pytorch 2025-08-14T21:22:29.9489483Z * [new branch] codex/fix-noqengine-quantized-engine-support -> origin/codex/fix-noqengine-quantized-engine-support 2025-08-14T21:22:29.9490138Z * [new branch] codex/fix-pin_memory-error-handling -> origin/codex/fix-pin_memory-error-handling 2025-08-14T21:22:29.9490704Z * [new branch] codex/propose-fix-for-issue-160332 -> origin/codex/propose-fix-for-issue-160332 2025-08-14T21:22:29.9491498Z * [new branch] codex/refactor-lintrunner-config-to-use-uv-run -> origin/codex/refactor-lintrunner-config-to-use-uv-run 2025-08-14T21:22:29.9492352Z * [new branch] codex/verify-torch-output-and-log-results -> origin/codex/verify-torch-output-and-log-results 2025-08-14T21:22:29.9499488Z * [new branch] compile_fsdp2_disable_stream_and_event -> origin/compile_fsdp2_disable_stream_and_event 2025-08-14T21:22:29.9500053Z * [new branch] comply-with-setuptools -> origin/comply-with-setuptools 2025-08-14T21:22:29.9500465Z * [new branch] context_test -> origin/context_test 2025-08-14T21:22:29.9500859Z * [new branch] copilot/fix-157446 -> origin/copilot/fix-157446 2025-08-14T21:22:29.9501308Z * [new branch] copilot/fix-159257 -> origin/copilot/fix-159257 2025-08-14T21:22:29.9501691Z * [new branch] copy_graph -> origin/copy_graph 2025-08-14T21:22:29.9502169Z * [new branch] cpio/fix_new_ami_tests -> origin/cpio/fix_new_ami_tests 2025-08-14T21:22:29.9502559Z * [new branch] csl/3_proc_sm -> origin/csl/3_proc_sm 2025-08-14T21:22:29.9503064Z * [new branch] csl/add_file_merge_conflict_csv -> origin/csl/add_file_merge_conflict_csv 2025-08-14T21:22:29.9503521Z * [new branch] csl/always_produce_xml -> origin/csl/always_produce_xml 2025-08-14T21:22:29.9504076Z * [new branch] csl/build_test_more_procs -> origin/csl/build_test_more_procs 2025-08-14T21:22:29.9504991Z * [new branch] csl/build_test_more_procs2 -> origin/csl/build_test_more_procs2 2025-08-14T21:22:29.9506224Z * [new branch] csl/disable_flaky_cpp_test -> origin/csl/disable_flaky_cpp_test 2025-08-14T21:22:29.9507282Z * [new branch] csl/disable_periodic_test -> origin/csl/disable_periodic_test 2025-08-14T21:22:29.9512694Z * [new branch] csl/executorch_docker_fail -> origin/csl/executorch_docker_fail 2025-08-14T21:22:29.9513526Z * [new branch] csl/fix_check_alerts -> origin/csl/fix_check_alerts 2025-08-14T21:22:29.9514425Z * [new branch] csl/katex -> origin/csl/katex 2025-08-14T21:22:29.9515360Z * [new branch] csl/larger_runner -> origin/csl/larger_runner 2025-08-14T21:22:29.9516485Z * [new branch] csl/lintrunner_changed_files_removed -> origin/csl/lintrunner_changed_files_removed 2025-08-14T21:22:29.9517442Z * [new branch] csl/lintrunner_changed_files_removed_test -> origin/csl/lintrunner_changed_files_removed_test 2025-08-14T21:22:29.9518273Z * [new branch] csl/lintrunner_stuff -> origin/csl/lintrunner_stuff 2025-08-14T21:22:29.9519158Z * [new branch] csl/mps_sharding -> origin/csl/mps_sharding 2025-08-14T21:22:29.9520249Z * [new branch] csl/multistage_docker -> origin/csl/multistage_docker 2025-08-14T21:22:29.9521179Z * [new branch] csl/no_keep_goin_rocm -> origin/csl/no_keep_goin_rocm 2025-08-14T21:22:29.9526291Z * [new branch] csl/not_600_timeout -> origin/csl/not_600_timeout 2025-08-14T21:22:29.9526733Z * [new branch] csl/remove_unused_docker_images -> origin/csl/remove_unused_docker_images 2025-08-14T21:22:29.9527220Z * [new branch] csl/revert_open -> origin/csl/revert_open 2025-08-14T21:22:29.9527708Z * [new branch] csl/rocm_upload_artifacts_while_running -> 
origin/csl/rocm_upload_artifacts_while_running 2025-08-14T21:22:29.9528262Z * [new branch] csl/skip_build -> origin/csl/skip_build 2025-08-14T21:22:29.9528633Z * [new branch] csl/td_dynamo -> origin/csl/td_dynamo 2025-08-14T21:22:29.9529108Z * [new branch] csl/test_cuda_build_large_runner -> origin/csl/test_cuda_build_large_runner 2025-08-14T21:22:29.9529630Z * [new branch] csl/unused_docker -> origin/csl/unused_docker 2025-08-14T21:22:29.9530070Z * [new branch] csl/win_sccache -> origin/csl/win_sccache 2025-08-14T21:22:29.9530844Z * [new branch] cublasltrelax2 -> origin/cublasltrelax2 2025-08-14T21:22:29.9531731Z * [new branch] cublasrelax2 -> origin/cublasrelax2 2025-08-14T21:22:29.9532673Z * [new branch] cudnnsdparefactor -> origin/cudnnsdparefactor 2025-08-14T21:22:29.9533745Z * [new branch] custom_lowering_dict -> origin/custom_lowering_dict 2025-08-14T21:22:29.9534608Z * [new branch] czhuge_muon_dev -> origin/czhuge_muon_dev 2025-08-14T21:22:29.9535967Z * [new branch] d4l3k/delete_hook -> origin/d4l3k/delete_hook 2025-08-14T21:22:29.9545571Z * [new branch] d4l3k/dist_queue -> origin/d4l3k/dist_queue 2025-08-14T21:22:29.9546512Z * [new branch] d4l3k/wait_stream -> origin/d4l3k/wait_stream 2025-08-14T21:22:29.9547556Z * [new branch] dcp-safetensor-test-fix -> origin/dcp-safetensor-test-fix 2025-08-14T21:22:29.9548423Z * [new branch] dcp_zoc -> origin/dcp_zoc 2025-08-14T21:22:29.9549928Z * [new branch] delete-quant-docs -> origin/delete-quant-docs 2025-08-14T21:22:29.9555378Z * [new branch] dependabot/pip/dot-ci/docker/protobuf-5.29.5 -> origin/dependabot/pip/dot-ci/docker/protobuf-5.29.5 2025-08-14T21:22:29.9555962Z * [new branch] desertfire/test_cpp_wrapper -> origin/desertfire/test_cpp_wrapper 2025-08-14T21:22:29.9556485Z * [new branch] desertfire/triton-cpu-for-aarch64 -> origin/desertfire/triton-cpu-for-aarch64 2025-08-14T21:22:29.9557041Z * [new branch] dev/joona/MPSNDArrayAdd -> origin/dev/joona/MPSNDArrayAdd 2025-08-14T21:22:29.9557798Z * [new branch] dev/joona/Unranked -> origin/dev/joona/Unranked 2025-08-14T21:22:29.9559056Z * [new branch] dev/joona/cat -> origin/dev/joona/cat 2025-08-14T21:22:29.9560156Z * [new branch] dev/joona/cat_remove_graph -> origin/dev/joona/cat_remove_graph 2025-08-14T21:22:29.9561023Z * [new branch] dev/joona/embeddingbag -> origin/dev/joona/embeddingbag 2025-08-14T21:22:29.9562382Z * [new branch] dev/joona/getTensorsString -> origin/dev/joona/getTensorsString 2025-08-14T21:22:29.9563723Z * [new branch] dev/joona/maxpool2dwithindices_errmsg -> origin/dev/joona/maxpool2dwithindices_errmsg 2025-08-14T21:22:29.9565044Z * [new branch] dev/joona/mps_linear_macos14 -> origin/dev/joona/mps_linear_macos14 2025-08-14T21:22:29.9569980Z * [new branch] dev/joona/sdpa -> origin/dev/joona/sdpa 2025-08-14T21:22:29.9570448Z * [new branch] dev/joona/synchronize_benchmark -> origin/dev/joona/synchronize_benchmark 2025-08-14T21:22:29.9570932Z * [new branch] dev/joona/topk_newapi -> origin/dev/joona/topk_newapi 2025-08-14T21:22:29.9571430Z * [new branch] dev/joona/type_inf -> origin/dev/joona/type_inf 2025-08-14T21:22:29.9571823Z * [new branch] dev/joona/upsize3d -> origin/dev/joona/upsize3d 2025-08-14T21:22:29.9572269Z * [new branch] disable -> origin/disable 2025-08-14T21:22:29.9573560Z * [new branch] divyanshk-log-api-usage-datapipes-1 -> origin/divyanshk-log-api-usage-datapipes-1 2025-08-14T21:22:29.9574507Z * [new branch] e2e-baseline -> origin/e2e-baseline 2025-08-14T21:22:29.9576005Z * [new branch] embg/test_inductor_ci_128B -> origin/embg/test_inductor_ci_128B 
2025-08-14T21:22:29.9576896Z * [new branch] embg/test_inductor_ci_base -> origin/embg/test_inductor_ci_base 2025-08-14T21:22:29.9577974Z * [new branch] embg/test_inductor_ci_control -> origin/embg/test_inductor_ci_control 2025-08-14T21:22:29.9578731Z * [new branch] embg/triton_l2_prefetch_128B -> origin/embg/triton_l2_prefetch_128B 2025-08-14T21:22:29.9588385Z * [new branch] embg/triton_l2_prefetch_256B -> origin/embg/triton_l2_prefetch_256B 2025-08-14T21:22:29.9588980Z * [new branch] enable-b200-benchmark -> origin/enable-b200-benchmark 2025-08-14T21:22:29.9589514Z * [new branch] eqy-patch-1 -> origin/eqy-patch-1 2025-08-14T21:22:29.9589893Z * [new branch] eqy-patch-10 -> origin/eqy-patch-10 2025-08-14T21:22:29.9590787Z * [new branch] eqy-patch-2 -> origin/eqy-patch-2 2025-08-14T21:22:29.9591764Z * [new branch] example-convert-torch.nn -> origin/example-convert-torch.nn 2025-08-14T21:22:29.9593191Z * [new branch] exclamaforte/amd-ma -> origin/exclamaforte/amd-ma 2025-08-14T21:22:29.9594194Z * [new branch] exclamaforte/bump-transformer-version -> origin/exclamaforte/bump-transformer-version 2025-08-14T21:22:29.9603258Z * [new branch] exclamaforte/combo-kernels-perf-run -> origin/exclamaforte/combo-kernels-perf-run 2025-08-14T21:22:29.9604050Z * [new branch] exclamaforte/debug-autotuner-profile -> origin/exclamaforte/debug-autotuner-profile 2025-08-14T21:22:29.9604782Z * [new branch] exclamaforte/do_bench_refactor -> origin/exclamaforte/do_bench_refactor 2025-08-14T21:22:29.9605488Z * [new branch] exclamaforte/enable-mem-dep-fusion -> origin/exclamaforte/enable-mem-dep-fusion 2025-08-14T21:22:29.9606161Z * [new branch] exclamaforte/fix-exhaustive-autotuning -> origin/exclamaforte/fix-exhaustive-autotuning 2025-08-14T21:22:29.9606765Z * [new branch] exclamaforte/fix-trace-parsing-fx-svg -> origin/exclamaforte/fix-trace-parsing-fx-svg 2025-08-14T21:22:29.9607392Z * [new branch] exclamaforte/force-pointwise-cat-perf-run -> origin/exclamaforte/force-pointwise-cat-perf-run 2025-08-14T21:22:29.9608027Z * [new branch] exclamaforte/fusion-data -> origin/exclamaforte/fusion-data 2025-08-14T21:22:29.9608547Z * [new branch] exclamaforte/gemm-benchmark-run -> origin/exclamaforte/gemm-benchmark-run 2025-08-14T21:22:29.9613312Z * [new branch] exclamaforte/gemm-export-model -> origin/exclamaforte/gemm-export-model 2025-08-14T21:22:29.9613792Z * [new branch] exclamaforte/gemm-model -> origin/exclamaforte/gemm-model 2025-08-14T21:22:29.9614348Z * [new branch] exclamaforte/gemm-model-all-data-collection -> origin/exclamaforte/gemm-model-all-data-collection 2025-08-14T21:22:29.9614918Z * [new branch] exclamaforte/gemm-to-amd -> origin/exclamaforte/gemm-to-amd 2025-08-14T21:22:29.9615387Z * [new branch] exclamaforte/just-gemm-model -> origin/exclamaforte/just-gemm-model 2025-08-14T21:22:29.9617380Z * [new branch] exclamaforte/just-gemm-model-no-refactor -> origin/exclamaforte/just-gemm-model-no-refactor 2025-08-14T21:22:29.9617943Z * [new branch] exclamaforte/memory-counter -> origin/exclamaforte/memory-counter 2025-08-14T21:22:29.9618457Z * [new branch] exclamaforte/scheduler-refactor -> origin/exclamaforte/scheduler-refactor 2025-08-14T21:22:29.9618989Z * [new branch] exclamaforte/test_cpp_wrapper_mode -> origin/exclamaforte/test_cpp_wrapper_mode 2025-08-14T21:22:29.9619559Z * [new branch] exclamaforte/update-autotune-configs -> origin/exclamaforte/update-autotune-configs 2025-08-14T21:22:29.9620161Z * [new branch] exclamaforte/update-autotune-configs-2 -> origin/exclamaforte/update-autotune-configs-2 
2025-08-14T21:22:29.9620777Z * [new branch] exclamaforte/update-pandas-numpy-ci -> origin/exclamaforte/update-pandas-numpy-ci 2025-08-14T21:22:29.9621364Z * [new branch] exclamforte/gemm-model-final -> origin/exclamforte/gemm-model-final 2025-08-14T21:22:29.9621794Z * [new branch] exec -> origin/exec 2025-08-14T21:22:29.9622678Z * [new branch] experimental-mosaic -> origin/experimental-mosaic 2025-08-14T21:22:29.9630030Z * [new branch] export-D58091437 -> origin/export-D58091437 2025-08-14T21:22:29.9630543Z * [new branch] export-D61047529 -> origin/export-D61047529 2025-08-14T21:22:29.9631033Z * [new branch] export-D68846308 -> origin/export-D68846308 2025-08-14T21:22:29.9631517Z * [new branch] export-D70112642 -> origin/export-D70112642 2025-08-14T21:22:29.9632013Z * [new branch] export-D71412006 -> origin/export-D71412006 2025-08-14T21:22:29.9632489Z * [new branch] export-D72483950 -> origin/export-D72483950 2025-08-14T21:22:29.9632989Z * [new branch] export-D73042989 -> origin/export-D73042989 2025-08-14T21:22:29.9633482Z * [new branch] export-D73287751 -> origin/export-D73287751 2025-08-14T21:22:29.9633881Z * [new branch] export-D75183591 -> origin/export-D75183591 2025-08-14T21:22:29.9634255Z * [new branch] export-D75605373 -> origin/export-D75605373 2025-08-14T21:22:29.9634624Z * [new branch] export-D75617432 -> origin/export-D75617432 2025-08-14T21:22:29.9634982Z * [new branch] export-D75659965 -> origin/export-D75659965 2025-08-14T21:22:29.9635707Z * [new branch] export-D76080931 -> origin/export-D76080931 2025-08-14T21:22:29.9636763Z * [new branch] export-D76463347 -> origin/export-D76463347 2025-08-14T21:22:29.9637685Z * [new branch] export-D76797250 -> origin/export-D76797250 2025-08-14T21:22:29.9646542Z * [new branch] export-D76885271 -> origin/export-D76885271 2025-08-14T21:22:29.9647110Z * [new branch] export-D76885620 -> origin/export-D76885620 2025-08-14T21:22:29.9647588Z * [new branch] export-D76936623 -> origin/export-D76936623 2025-08-14T21:22:29.9648073Z * [new branch] export-D76958268 -> origin/export-D76958268 2025-08-14T21:22:29.9648544Z * [new branch] export-D78047846 -> origin/export-D78047846 2025-08-14T21:22:29.9649311Z * [new branch] export-D78308105 -> origin/export-D78308105 2025-08-14T21:22:29.9649681Z * [new branch] export-D78363609 -> origin/export-D78363609 2025-08-14T21:22:29.9650169Z * [new branch] export-D78375400 -> origin/export-D78375400 2025-08-14T21:22:29.9651255Z * [new branch] export-D78431075 -> origin/export-D78431075 2025-08-14T21:22:29.9652094Z * [new branch] export-D78431305 -> origin/export-D78431305 2025-08-14T21:22:29.9656668Z * [new branch] export-D78458745 -> origin/export-D78458745 2025-08-14T21:22:29.9657052Z * [new branch] export-D78524147 -> origin/export-D78524147 2025-08-14T21:22:29.9657416Z * [new branch] export-D78580107 -> origin/export-D78580107 2025-08-14T21:22:29.9657786Z * [new branch] export-D78588406 -> origin/export-D78588406 2025-08-14T21:22:29.9658162Z * [new branch] export-D78691422 -> origin/export-D78691422 2025-08-14T21:22:29.9658532Z * [new branch] export-D78758466 -> origin/export-D78758466 2025-08-14T21:22:29.9659458Z * [new branch] export-D78822171 -> origin/export-D78822171 2025-08-14T21:22:29.9660475Z * [new branch] export-D78822351 -> origin/export-D78822351 2025-08-14T21:22:29.9687828Z * [new branch] export-D78822507 -> origin/export-D78822507 2025-08-14T21:22:29.9688598Z * [new branch] export-D78826994 -> origin/export-D78826994 2025-08-14T21:22:29.9689013Z * [new branch] export-D78894142 -> 
origin/export-D78894142 2025-08-14T21:22:29.9689407Z * [new branch] export-D78894324 -> origin/export-D78894324 2025-08-14T21:22:29.9689786Z * [new branch] export-D78907485 -> origin/export-D78907485 2025-08-14T21:22:29.9690173Z * [new branch] export-D78929245 -> origin/export-D78929245 2025-08-14T21:22:29.9690690Z * [new branch] export-D78934925 -> origin/export-D78934925 2025-08-14T21:22:29.9691069Z * [new branch] export-D78953203 -> origin/export-D78953203 2025-08-14T21:22:29.9691449Z * [new branch] export-D78953229 -> origin/export-D78953229 2025-08-14T21:22:29.9691828Z * [new branch] export-D78957093 -> origin/export-D78957093 2025-08-14T21:22:29.9692215Z * [new branch] export-D78957389 -> origin/export-D78957389 2025-08-14T21:22:29.9692586Z * [new branch] export-D78957974 -> origin/export-D78957974 2025-08-14T21:22:29.9692965Z * [new branch] export-D78979812 -> origin/export-D78979812 2025-08-14T21:22:29.9693340Z * [new branch] export-D78996107 -> origin/export-D78996107 2025-08-14T21:22:29.9693717Z * [new branch] export-D79026433 -> origin/export-D79026433 2025-08-14T21:22:29.9694085Z * [new branch] export-D79230339 -> origin/export-D79230339 2025-08-14T21:22:29.9694458Z * [new branch] export-D79319835 -> origin/export-D79319835 2025-08-14T21:22:29.9694833Z * [new branch] export-D79328456 -> origin/export-D79328456 2025-08-14T21:22:29.9695201Z * [new branch] export-D79534608 -> origin/export-D79534608 2025-08-14T21:22:29.9695652Z * [new branch] export-D79647167 -> origin/export-D79647167 2025-08-14T21:22:29.9696223Z * [new branch] export-D79751098 -> origin/export-D79751098 2025-08-14T21:22:29.9696593Z * [new branch] export-D79785974 -> origin/export-D79785974 2025-08-14T21:22:29.9696959Z * [new branch] export-D80025417 -> origin/export-D80025417 2025-08-14T21:22:29.9697340Z * [new branch] export-D80120333 -> origin/export-D80120333 2025-08-14T21:22:29.9697714Z * [new branch] export-D80214882 -> origin/export-D80214882 2025-08-14T21:22:29.9698191Z * [new branch] exported-model-train-idempotent -> origin/exported-model-train-idempotent 2025-08-14T21:22:29.9698709Z * [new branch] ezyang/wip-aot-descriptors -> origin/ezyang/wip-aot-descriptors 2025-08-14T21:22:29.9699137Z * [new branch] fa_u8_brgemm -> origin/fa_u8_brgemm 2025-08-14T21:22:29.9699525Z * [new branch] fastmath_baseline -> origin/fastmath_baseline 2025-08-14T21:22:29.9705386Z * [new branch] fbcode/warm -> origin/fbcode/warm 2025-08-14T21:22:29.9706461Z * [new branch] fca -> origin/fca 2025-08-14T21:22:29.9707374Z * [new branch] fca2_ca5984c -> origin/fca2_ca5984c 2025-08-14T21:22:29.9708390Z * [new branch] fca5 -> origin/fca5 2025-08-14T21:22:29.9709770Z * [new branch] feature/function-numa-binding -> origin/feature/function-numa-binding 2025-08-14T21:22:29.9714816Z * [new branch] fengyuan/external-proj -> origin/fengyuan/external-proj 2025-08-14T21:22:29.9715393Z * [new branch] fengyuan/out-of-tree-xpu-ops-improve-test -> origin/fengyuan/out-of-tree-xpu-ops-improve-test 2025-08-14T21:22:29.9716123Z * [new branch] fengyuan/out-of-tree-xpu-ops-remove-dtype -> origin/fengyuan/out-of-tree-xpu-ops-remove-dtype 2025-08-14T21:22:29.9716667Z * [new branch] fengyuan/test-xpu -> origin/fengyuan/test-xpu 2025-08-14T21:22:29.9717698Z * [new branch] ffast_math_baseline -> origin/ffast_math_baseline 2025-08-14T21:22:29.9718614Z * [new branch] ffast_math_target -> origin/ffast_math_target 2025-08-14T21:22:29.9720144Z * [new branch] findhao/base_commit -> origin/findhao/base_commit 2025-08-14T21:22:29.9721119Z * [new branch] 
findhao/base_commit1 -> origin/findhao/base_commit1 2025-08-14T21:22:29.9722139Z * [new branch] findhao/fix-indirect-access -> origin/findhao/fix-indirect-access 2025-08-14T21:22:29.9722960Z * [new branch] findhao/multistream2 -> origin/findhao/multistream2 2025-08-14T21:22:29.9723855Z * [new branch] findhao/multistream5 -> origin/findhao/multistream5 2025-08-14T21:22:29.9724747Z * [new branch] findhao/multistream6 -> origin/findhao/multistream6 2025-08-14T21:22:29.9733869Z * [new branch] findhao/operatorbench3 -> origin/findhao/operatorbench3 2025-08-14T21:22:29.9734540Z * [new branch] findhao/operatorbench5 -> origin/findhao/operatorbench5 2025-08-14T21:22:29.9735099Z * [new branch] findhao/tritonparse -> origin/findhao/tritonparse 2025-08-14T21:22:29.9735649Z * [new branch] fix -> origin/fix 2025-08-14T21:22:29.9736238Z * [new branch] fix-ck-gemm-template-format -> origin/fix-ck-gemm-template-format 2025-08-14T21:22:29.9736726Z * [new branch] fix-config-ignore -> origin/fix-config-ignore 2025-08-14T21:22:29.9737111Z * [new branch] fix-dict-guard -> origin/fix-dict-guard 2025-08-14T21:22:29.9737583Z * [new branch] fix-distributed-warning -> origin/fix-distributed-warning 2025-08-14T21:22:29.9738067Z * [new branch] fix-inductor-periodic-0528 -> origin/fix-inductor-periodic-0528 2025-08-14T21:22:29.9738697Z * [new branch] fix-rlease-feature-template -> origin/fix-rlease-feature-template 2025-08-14T21:22:29.9739170Z * [new branch] fix_153389 -> origin/fix_153389 2025-08-14T21:22:29.9739630Z * [new branch] fixes-triage -> origin/fixes-triage 2025-08-14T21:22:29.9740027Z * [new branch] flash_decoding_cpu -> origin/flash_decoding_cpu 2025-08-14T21:22:29.9740411Z * [new branch] flex-flash -> origin/flex-flash 2025-08-14T21:22:29.9748032Z * [new branch] flex-lowering -> origin/flex-lowering 2025-08-14T21:22:29.9748530Z * [new branch] flex-warning -> origin/flex-warning 2025-08-14T21:22:29.9749292Z * [new branch] flex_attention_functorch_grad -> origin/flex_attention_functorch_grad 2025-08-14T21:22:29.9749732Z * [new branch] flex_flash -> origin/flex_flash 2025-08-14T21:22:29.9750168Z * [new branch] fmassa/fix_memeff_sharding_rule -> origin/fmassa/fix_memeff_sharding_rule 2025-08-14T21:22:29.9750684Z * [new branch] fmassa/try_fix_ac_tag_propagation -> origin/fmassa/try_fix_ac_tag_propagation 2025-08-14T21:22:29.9751457Z * [new branch] fsdp2_trace_rules -> origin/fsdp2_trace_rules 2025-08-14T21:22:29.9752416Z * [new branch] fsdpv2_3d -> origin/fsdpv2_3d 2025-08-14T21:22:29.9753624Z * [new branch] fsdpv2_3d_m1 -> origin/fsdpv2_3d_m1 2025-08-14T21:22:29.9756057Z * [new branch] fx_cpp -> origin/fx_cpp 2025-08-14T21:22:29.9756404Z * [new branch] fy/fix-win -> origin/fy/fix-win 2025-08-14T21:22:29.9758496Z * [new branch] gh/AlnisM/1/base -> origin/gh/AlnisM/1/base 2025-08-14T21:22:29.9759518Z * [new branch] gh/AlnisM/1/head -> origin/gh/AlnisM/1/head 2025-08-14T21:22:29.9761026Z * [new branch] gh/CaoE/2/base -> origin/gh/CaoE/2/base 2025-08-14T21:22:29.9762026Z * [new branch] gh/CaoE/2/head -> origin/gh/CaoE/2/head 2025-08-14T21:22:29.9763008Z * [new branch] gh/CaoE/2/orig -> origin/gh/CaoE/2/orig 2025-08-14T21:22:29.9764918Z * [new branch] gh/ColinPeppler/72/base -> origin/gh/ColinPeppler/72/base 2025-08-14T21:22:29.9766201Z * [new branch] gh/ColinPeppler/72/head -> origin/gh/ColinPeppler/72/head 2025-08-14T21:22:29.9767094Z * [new branch] gh/ColinPeppler/72/orig -> origin/gh/ColinPeppler/72/orig 2025-08-14T21:22:29.9776655Z * [new branch] gh/ColinPeppler/77/base -> origin/gh/ColinPeppler/77/base 
2025-08-14T21:22:29.9777238Z * [new branch] gh/ColinPeppler/77/head -> origin/gh/ColinPeppler/77/head
[... several hundred similar git fetch lines elided (2025-08-14T21:22:29.97 through 21:22:30.08): new remote-tracking branches of the form "* [new branch] gh/<user>/<N>/{base,head,orig} -> origin/gh/<user>/<N>/..." plus a few named branches (e.g. gh/aditew01/openblas, gh/alexbrauckmann/paddedtensor_init), covering contributors ColinPeppler, EikanWang, Gasoonjia, H-Huang, IvanKobzarev, NikhilAPatel, PaliC, PaulZhang12, SamGinzburg, Sidharth123-cpu, StrongerXi, XilunWu, XuehaiPan, ZhiweiYan-96, aakhundov, aditew01, alexbrauckmann, alexsamardzic, amjames, andrewor14, andyanwang, angelayi, ani300, anijain2305, anjali411, ankitageorge ...]
2025-08-14T21:22:30.0806830Z * [new branch] gh/ankitageorge/18/head ->
origin/gh/ankitageorge/18/head 2025-08-14T21:22:30.0807010Z * [new branch] gh/ankitageorge/18/orig -> origin/gh/ankitageorge/18/orig 2025-08-14T21:22:30.0807824Z * [new branch] gh/ankitageorge/19/base -> origin/gh/ankitageorge/19/base 2025-08-14T21:22:30.0808823Z * [new branch] gh/ankitageorge/19/head -> origin/gh/ankitageorge/19/head 2025-08-14T21:22:30.0809795Z * [new branch] gh/ankitageorge/19/orig -> origin/gh/ankitageorge/19/orig 2025-08-14T21:22:30.0811151Z * [new branch] gh/ankitageorge/20/base -> origin/gh/ankitageorge/20/base 2025-08-14T21:22:30.0812257Z * [new branch] gh/ankitageorge/20/head -> origin/gh/ankitageorge/20/head 2025-08-14T21:22:30.0814644Z * [new branch] gh/ankitageorge/20/orig -> origin/gh/ankitageorge/20/orig 2025-08-14T21:22:30.0814834Z * [new branch] gh/ankitageorge/21/base -> origin/gh/ankitageorge/21/base 2025-08-14T21:22:30.0815673Z * [new branch] gh/ankitageorge/21/head -> origin/gh/ankitageorge/21/head 2025-08-14T21:22:30.0816626Z * [new branch] gh/ankitageorge/21/orig -> origin/gh/ankitageorge/21/orig 2025-08-14T21:22:30.0818353Z * [new branch] gh/anshul-si/1/base -> origin/gh/anshul-si/1/base 2025-08-14T21:22:30.0819292Z * [new branch] gh/anshul-si/1/head -> origin/gh/anshul-si/1/head 2025-08-14T21:22:30.0820848Z * [new branch] gh/anshul-si/10/base -> origin/gh/anshul-si/10/base 2025-08-14T21:22:30.0821778Z * [new branch] gh/anshul-si/10/head -> origin/gh/anshul-si/10/head 2025-08-14T21:22:30.0822741Z * [new branch] gh/anshul-si/10/orig -> origin/gh/anshul-si/10/orig 2025-08-14T21:22:30.0824210Z * [new branch] gh/anshul-si/11/base -> origin/gh/anshul-si/11/base 2025-08-14T21:22:30.0825147Z * [new branch] gh/anshul-si/11/head -> origin/gh/anshul-si/11/head 2025-08-14T21:22:30.0826072Z * [new branch] gh/anshul-si/11/orig -> origin/gh/anshul-si/11/orig 2025-08-14T21:22:30.0831784Z * [new branch] gh/anshul-si/12/base -> origin/gh/anshul-si/12/base 2025-08-14T21:22:30.0835599Z * [new branch] gh/anshul-si/12/head -> origin/gh/anshul-si/12/head 2025-08-14T21:22:30.0835817Z * [new branch] gh/anshul-si/12/orig -> origin/gh/anshul-si/12/orig 2025-08-14T21:22:30.0836032Z * [new branch] gh/anshul-si/13/base -> origin/gh/anshul-si/13/base 2025-08-14T21:22:30.0836243Z * [new branch] gh/anshul-si/13/head -> origin/gh/anshul-si/13/head 2025-08-14T21:22:30.0836871Z * [new branch] gh/anshul-si/13/orig -> origin/gh/anshul-si/13/orig 2025-08-14T21:22:30.0838170Z * [new branch] gh/anshul-si/14/base -> origin/gh/anshul-si/14/base 2025-08-14T21:22:30.0839099Z * [new branch] gh/anshul-si/14/head -> origin/gh/anshul-si/14/head 2025-08-14T21:22:30.0839999Z * [new branch] gh/anshul-si/14/orig -> origin/gh/anshul-si/14/orig 2025-08-14T21:22:30.0841241Z * [new branch] gh/anshul-si/15/base -> origin/gh/anshul-si/15/base 2025-08-14T21:22:30.0845827Z * [new branch] gh/anshul-si/15/head -> origin/gh/anshul-si/15/head 2025-08-14T21:22:30.0845990Z * [new branch] gh/anshul-si/15/orig -> origin/gh/anshul-si/15/orig 2025-08-14T21:22:30.0846156Z * [new branch] gh/anshul-si/16/base -> origin/gh/anshul-si/16/base 2025-08-14T21:22:30.0846315Z * [new branch] gh/anshul-si/16/head -> origin/gh/anshul-si/16/head 2025-08-14T21:22:30.0846680Z * [new branch] gh/anshul-si/16/orig -> origin/gh/anshul-si/16/orig 2025-08-14T21:22:30.0848016Z * [new branch] gh/anshul-si/17/base -> origin/gh/anshul-si/17/base 2025-08-14T21:22:30.0849263Z * [new branch] gh/anshul-si/17/head -> origin/gh/anshul-si/17/head 2025-08-14T21:22:30.0850364Z * [new branch] gh/anshul-si/17/orig -> origin/gh/anshul-si/17/orig 
2025-08-14T21:22:30.0851842Z * [new branch] gh/anshul-si/18/base -> origin/gh/anshul-si/18/base 2025-08-14T21:22:30.0852824Z * [new branch] gh/anshul-si/18/head -> origin/gh/anshul-si/18/head 2025-08-14T21:22:30.0853753Z * [new branch] gh/anshul-si/18/orig -> origin/gh/anshul-si/18/orig 2025-08-14T21:22:30.0855120Z * [new branch] gh/anshul-si/19/base -> origin/gh/anshul-si/19/base 2025-08-14T21:22:30.0856250Z * [new branch] gh/anshul-si/19/head -> origin/gh/anshul-si/19/head 2025-08-14T21:22:30.0861407Z * [new branch] gh/anshul-si/19/orig -> origin/gh/anshul-si/19/orig 2025-08-14T21:22:30.0862657Z * [new branch] gh/anshul-si/2/base -> origin/gh/anshul-si/2/base 2025-08-14T21:22:30.0863592Z * [new branch] gh/anshul-si/2/head -> origin/gh/anshul-si/2/head 2025-08-14T21:22:30.0864863Z * [new branch] gh/anshul-si/20/base -> origin/gh/anshul-si/20/base 2025-08-14T21:22:30.0865792Z * [new branch] gh/anshul-si/20/head -> origin/gh/anshul-si/20/head 2025-08-14T21:22:30.0866708Z * [new branch] gh/anshul-si/20/orig -> origin/gh/anshul-si/20/orig 2025-08-14T21:22:30.0867971Z * [new branch] gh/anshul-si/21/base -> origin/gh/anshul-si/21/base 2025-08-14T21:22:30.0868869Z * [new branch] gh/anshul-si/21/head -> origin/gh/anshul-si/21/head 2025-08-14T21:22:30.0869787Z * [new branch] gh/anshul-si/21/orig -> origin/gh/anshul-si/21/orig 2025-08-14T21:22:30.0874759Z * [new branch] gh/anshul-si/22/base -> origin/gh/anshul-si/22/base 2025-08-14T21:22:30.0875019Z * [new branch] gh/anshul-si/22/head -> origin/gh/anshul-si/22/head 2025-08-14T21:22:30.0875189Z * [new branch] gh/anshul-si/22/orig -> origin/gh/anshul-si/22/orig 2025-08-14T21:22:30.0875350Z * [new branch] gh/anshul-si/23/base -> origin/gh/anshul-si/23/base 2025-08-14T21:22:30.0875851Z * [new branch] gh/anshul-si/23/head -> origin/gh/anshul-si/23/head 2025-08-14T21:22:30.0876796Z * [new branch] gh/anshul-si/23/orig -> origin/gh/anshul-si/23/orig 2025-08-14T21:22:30.0878120Z * [new branch] gh/anshul-si/24/base -> origin/gh/anshul-si/24/base 2025-08-14T21:22:30.0879133Z * [new branch] gh/anshul-si/24/head -> origin/gh/anshul-si/24/head 2025-08-14T21:22:30.0880059Z * [new branch] gh/anshul-si/24/orig -> origin/gh/anshul-si/24/orig 2025-08-14T21:22:30.0881537Z * [new branch] gh/anshul-si/25/base -> origin/gh/anshul-si/25/base 2025-08-14T21:22:30.0882424Z * [new branch] gh/anshul-si/25/head -> origin/gh/anshul-si/25/head 2025-08-14T21:22:30.0883360Z * [new branch] gh/anshul-si/25/orig -> origin/gh/anshul-si/25/orig 2025-08-14T21:22:30.0885059Z * [new branch] gh/anshul-si/26/base -> origin/gh/anshul-si/26/base 2025-08-14T21:22:30.0890251Z * [new branch] gh/anshul-si/26/head -> origin/gh/anshul-si/26/head 2025-08-14T21:22:30.0891248Z * [new branch] gh/anshul-si/26/orig -> origin/gh/anshul-si/26/orig 2025-08-14T21:22:30.0892560Z * [new branch] gh/anshul-si/27/base -> origin/gh/anshul-si/27/base 2025-08-14T21:22:30.0893561Z * [new branch] gh/anshul-si/27/head -> origin/gh/anshul-si/27/head 2025-08-14T21:22:30.0894526Z * [new branch] gh/anshul-si/27/orig -> origin/gh/anshul-si/27/orig 2025-08-14T21:22:30.0895849Z * [new branch] gh/anshul-si/3/base -> origin/gh/anshul-si/3/base 2025-08-14T21:22:30.0896682Z * [new branch] gh/anshul-si/3/head -> origin/gh/anshul-si/3/head 2025-08-14T21:22:30.0897879Z * [new branch] gh/anshul-si/4/base -> origin/gh/anshul-si/4/base 2025-08-14T21:22:30.0898747Z * [new branch] gh/anshul-si/4/head -> origin/gh/anshul-si/4/head 2025-08-14T21:22:30.0903850Z * [new branch] gh/anshul-si/5/base -> origin/gh/anshul-si/5/base 
2025-08-14T21:22:30.0904061Z * [new branch] gh/anshul-si/5/head -> origin/gh/anshul-si/5/head 2025-08-14T21:22:30.0904263Z * [new branch] gh/anshul-si/6/base -> origin/gh/anshul-si/6/base 2025-08-14T21:22:30.0904443Z * [new branch] gh/anshul-si/6/head -> origin/gh/anshul-si/6/head 2025-08-14T21:22:30.0904612Z * [new branch] gh/anshul-si/6/orig -> origin/gh/anshul-si/6/orig 2025-08-14T21:22:30.0905926Z * [new branch] gh/anshul-si/7/base -> origin/gh/anshul-si/7/base 2025-08-14T21:22:30.0906998Z * [new branch] gh/anshul-si/7/head -> origin/gh/anshul-si/7/head 2025-08-14T21:22:30.0907925Z * [new branch] gh/anshul-si/7/orig -> origin/gh/anshul-si/7/orig 2025-08-14T21:22:30.0909201Z * [new branch] gh/anshul-si/8/base -> origin/gh/anshul-si/8/base 2025-08-14T21:22:30.0910195Z * [new branch] gh/anshul-si/8/head -> origin/gh/anshul-si/8/head 2025-08-14T21:22:30.0911136Z * [new branch] gh/anshul-si/8/orig -> origin/gh/anshul-si/8/orig 2025-08-14T21:22:30.0912522Z * [new branch] gh/anshul-si/9/base -> origin/gh/anshul-si/9/base 2025-08-14T21:22:30.0915673Z * [new branch] gh/anshul-si/9/head -> origin/gh/anshul-si/9/head 2025-08-14T21:22:30.0923502Z * [new branch] gh/anshul-si/9/orig -> origin/gh/anshul-si/9/orig 2025-08-14T21:22:30.0924985Z * [new branch] gh/aorenste/132/base -> origin/gh/aorenste/132/base 2025-08-14T21:22:30.0925971Z * [new branch] gh/aorenste/132/head -> origin/gh/aorenste/132/head 2025-08-14T21:22:30.0927338Z * [new branch] gh/aorenste/235/base -> origin/gh/aorenste/235/base 2025-08-14T21:22:30.0936874Z * [new branch] gh/aorenste/235/head -> origin/gh/aorenste/235/head 2025-08-14T21:22:30.0937097Z * [new branch] gh/aorenste/235/orig -> origin/gh/aorenste/235/orig 2025-08-14T21:22:30.0937321Z * [new branch] gh/aorenste/236/base -> origin/gh/aorenste/236/base 2025-08-14T21:22:30.0937532Z * [new branch] gh/aorenste/236/head -> origin/gh/aorenste/236/head 2025-08-14T21:22:30.0937736Z * [new branch] gh/aorenste/236/orig -> origin/gh/aorenste/236/orig 2025-08-14T21:22:30.0937960Z * [new branch] gh/aorenste/237/base -> origin/gh/aorenste/237/base 2025-08-14T21:22:30.0938213Z * [new branch] gh/aorenste/237/head -> origin/gh/aorenste/237/head 2025-08-14T21:22:30.0939179Z * [new branch] gh/aorenste/237/orig -> origin/gh/aorenste/237/orig 2025-08-14T21:22:30.0940425Z * [new branch] gh/aorenste/238/base -> origin/gh/aorenste/238/base 2025-08-14T21:22:30.0941329Z * [new branch] gh/aorenste/238/head -> origin/gh/aorenste/238/head 2025-08-14T21:22:30.0942262Z * [new branch] gh/aorenste/238/orig -> origin/gh/aorenste/238/orig 2025-08-14T21:22:30.0944222Z * [new branch] gh/bdhirsh/650/base -> origin/gh/bdhirsh/650/base 2025-08-14T21:22:30.0945323Z * [new branch] gh/bdhirsh/650/head -> origin/gh/bdhirsh/650/head 2025-08-14T21:22:30.0946257Z * [new branch] gh/bdhirsh/650/orig -> origin/gh/bdhirsh/650/orig 2025-08-14T21:22:30.0947695Z * [new branch] gh/bdhirsh/656/base -> origin/gh/bdhirsh/656/base 2025-08-14T21:22:30.0948548Z * [new branch] gh/bdhirsh/656/head -> origin/gh/bdhirsh/656/head 2025-08-14T21:22:30.0950242Z * [new branch] gh/bdhirsh/657/base -> origin/gh/bdhirsh/657/base 2025-08-14T21:22:30.0951234Z * [new branch] gh/bdhirsh/657/head -> origin/gh/bdhirsh/657/head 2025-08-14T21:22:30.0952458Z * [new branch] gh/bdhirsh/659/base -> origin/gh/bdhirsh/659/base 2025-08-14T21:22:30.0953402Z * [new branch] gh/bdhirsh/659/head -> origin/gh/bdhirsh/659/head 2025-08-14T21:22:30.0954326Z * [new branch] gh/bdhirsh/659/orig -> origin/gh/bdhirsh/659/orig 2025-08-14T21:22:30.0956001Z * [new branch] 
gh/bdhirsh/663/base -> origin/gh/bdhirsh/663/base 2025-08-14T21:22:30.0956642Z * [new branch] gh/bdhirsh/663/head -> origin/gh/bdhirsh/663/head 2025-08-14T21:22:30.0965590Z * [new branch] gh/bdhirsh/663/orig -> origin/gh/bdhirsh/663/orig 2025-08-14T21:22:30.0965808Z * [new branch] gh/bdhirsh/665/base -> origin/gh/bdhirsh/665/base 2025-08-14T21:22:30.0966012Z * [new branch] gh/bdhirsh/665/head -> origin/gh/bdhirsh/665/head 2025-08-14T21:22:30.0966230Z * [new branch] gh/bdhirsh/665/orig -> origin/gh/bdhirsh/665/orig 2025-08-14T21:22:30.0966434Z * [new branch] gh/bdhirsh/666/base -> origin/gh/bdhirsh/666/base 2025-08-14T21:22:30.0966905Z * [new branch] gh/bdhirsh/666/head -> origin/gh/bdhirsh/666/head 2025-08-14T21:22:30.0967798Z * [new branch] gh/bdhirsh/666/orig -> origin/gh/bdhirsh/666/orig 2025-08-14T21:22:30.0969360Z * [new branch] gh/benjaminglass1/79/base -> origin/gh/benjaminglass1/79/base 2025-08-14T21:22:30.0970365Z * [new branch] gh/benjaminglass1/79/head -> origin/gh/benjaminglass1/79/head 2025-08-14T21:22:30.0971216Z * [new branch] gh/benjaminglass1/79/orig -> origin/gh/benjaminglass1/79/orig 2025-08-14T21:22:30.0978335Z * [new branch] gh/benjaminglass1/86/base -> origin/gh/benjaminglass1/86/base 2025-08-14T21:22:30.0978578Z * [new branch] gh/benjaminglass1/86/head -> origin/gh/benjaminglass1/86/head 2025-08-14T21:22:30.0978813Z * [new branch] gh/benjaminglass1/86/orig -> origin/gh/benjaminglass1/86/orig 2025-08-14T21:22:30.0979056Z * [new branch] gh/benjaminglass1/89/base -> origin/gh/benjaminglass1/89/base 2025-08-14T21:22:30.0979289Z * [new branch] gh/benjaminglass1/89/head -> origin/gh/benjaminglass1/89/head 2025-08-14T21:22:30.0979539Z * [new branch] gh/benjaminglass1/89/orig -> origin/gh/benjaminglass1/89/orig 2025-08-14T21:22:30.0979777Z * [new branch] gh/benjaminglass1/91/base -> origin/gh/benjaminglass1/91/base 2025-08-14T21:22:30.0980346Z * [new branch] gh/benjaminglass1/91/head -> origin/gh/benjaminglass1/91/head 2025-08-14T21:22:30.0981316Z * [new branch] gh/benjaminglass1/91/orig -> origin/gh/benjaminglass1/91/orig 2025-08-14T21:22:30.0982599Z * [new branch] gh/benjaminglass1/93/base -> origin/gh/benjaminglass1/93/base 2025-08-14T21:22:30.0983526Z * [new branch] gh/benjaminglass1/93/head -> origin/gh/benjaminglass1/93/head 2025-08-14T21:22:30.0984407Z * [new branch] gh/benjaminglass1/93/orig -> origin/gh/benjaminglass1/93/orig 2025-08-14T21:22:30.0985785Z * [new branch] gh/benjaminglass1/94/base -> origin/gh/benjaminglass1/94/base 2025-08-14T21:22:30.0990440Z * [new branch] gh/benjaminglass1/94/head -> origin/gh/benjaminglass1/94/head 2025-08-14T21:22:30.0992604Z * [new branch] gh/benjaminglass1/94/orig -> origin/gh/benjaminglass1/94/orig 2025-08-14T21:22:30.0993941Z * [new branch] gh/benjaminglass1/95/base -> origin/gh/benjaminglass1/95/base 2025-08-14T21:22:30.0994801Z * [new branch] gh/benjaminglass1/95/head -> origin/gh/benjaminglass1/95/head 2025-08-14T21:22:30.0995740Z * [new branch] gh/benjaminglass1/95/orig -> origin/gh/benjaminglass1/95/orig 2025-08-14T21:22:30.0997038Z * [new branch] gh/benjaminglass1/96/base -> origin/gh/benjaminglass1/96/base 2025-08-14T21:22:30.0997945Z * [new branch] gh/benjaminglass1/96/head -> origin/gh/benjaminglass1/96/head 2025-08-14T21:22:30.0998913Z * [new branch] gh/benjaminglass1/96/orig -> origin/gh/benjaminglass1/96/orig 2025-08-14T21:22:30.1000272Z * [new branch] gh/benjaminglass1/97/base -> origin/gh/benjaminglass1/97/base 2025-08-14T21:22:30.1005226Z * [new branch] gh/benjaminglass1/97/head -> 
origin/gh/benjaminglass1/97/head 2025-08-14T21:22:30.1005457Z * [new branch] gh/benjaminglass1/97/orig -> origin/gh/benjaminglass1/97/orig 2025-08-14T21:22:30.1005689Z * [new branch] gh/benjaminglass1/98/base -> origin/gh/benjaminglass1/98/base 2025-08-14T21:22:30.1005908Z * [new branch] gh/benjaminglass1/98/head -> origin/gh/benjaminglass1/98/head 2025-08-14T21:22:30.1006133Z * [new branch] gh/benjaminglass1/98/orig -> origin/gh/benjaminglass1/98/orig 2025-08-14T21:22:30.1008053Z * [new branch] gh/bobrenjc93/478/base -> origin/gh/bobrenjc93/478/base 2025-08-14T21:22:30.1008300Z * [new branch] gh/bobrenjc93/478/head -> origin/gh/bobrenjc93/478/head 2025-08-14T21:22:30.1008998Z * [new branch] gh/bobrenjc93/478/orig -> origin/gh/bobrenjc93/478/orig 2025-08-14T21:22:30.1010290Z * [new branch] gh/bobrenjc93/514/base -> origin/gh/bobrenjc93/514/base 2025-08-14T21:22:30.1011195Z * [new branch] gh/bobrenjc93/514/head -> origin/gh/bobrenjc93/514/head 2025-08-14T21:22:30.1012144Z * [new branch] gh/bobrenjc93/514/orig -> origin/gh/bobrenjc93/514/orig 2025-08-14T21:22:30.1013369Z * [new branch] gh/bobrenjc93/521/base -> origin/gh/bobrenjc93/521/base 2025-08-14T21:22:30.1014246Z * [new branch] gh/bobrenjc93/521/head -> origin/gh/bobrenjc93/521/head 2025-08-14T21:22:30.1015471Z * [new branch] gh/bobrenjc93/521/orig -> origin/gh/bobrenjc93/521/orig 2025-08-14T21:22:30.1023739Z * [new branch] gh/bobrenjc93/522/base -> origin/gh/bobrenjc93/522/base 2025-08-14T21:22:30.1023993Z * [new branch] gh/bobrenjc93/522/head -> origin/gh/bobrenjc93/522/head 2025-08-14T21:22:30.1024222Z * [new branch] gh/bobrenjc93/522/orig -> origin/gh/bobrenjc93/522/orig 2025-08-14T21:22:30.1024410Z * [new branch] gh/bobrenjc93/525/base -> origin/gh/bobrenjc93/525/base 2025-08-14T21:22:30.1024905Z * [new branch] gh/bobrenjc93/525/head -> origin/gh/bobrenjc93/525/head 2025-08-14T21:22:30.1025836Z * [new branch] gh/bobrenjc93/525/orig -> origin/gh/bobrenjc93/525/orig 2025-08-14T21:22:30.1027059Z * [new branch] gh/bobrenjc93/526/base -> origin/gh/bobrenjc93/526/base 2025-08-14T21:22:30.1027969Z * [new branch] gh/bobrenjc93/526/head -> origin/gh/bobrenjc93/526/head 2025-08-14T21:22:30.1028858Z * [new branch] gh/bobrenjc93/526/orig -> origin/gh/bobrenjc93/526/orig 2025-08-14T21:22:30.1034235Z * [new branch] gh/bobrenjc93/527/base -> origin/gh/bobrenjc93/527/base 2025-08-14T21:22:30.1034406Z * [new branch] gh/bobrenjc93/527/head -> origin/gh/bobrenjc93/527/head 2025-08-14T21:22:30.1034581Z * [new branch] gh/bobrenjc93/527/orig -> origin/gh/bobrenjc93/527/orig 2025-08-14T21:22:30.1034750Z * [new branch] gh/bobrenjc93/528/base -> origin/gh/bobrenjc93/528/base 2025-08-14T21:22:30.1034985Z * [new branch] gh/bobrenjc93/528/head -> origin/gh/bobrenjc93/528/head 2025-08-14T21:22:30.1035180Z * [new branch] gh/bobrenjc93/528/orig -> origin/gh/bobrenjc93/528/orig 2025-08-14T21:22:30.1036450Z * [new branch] gh/bobrenjc93/529/base -> origin/gh/bobrenjc93/529/base 2025-08-14T21:22:30.1037377Z * [new branch] gh/bobrenjc93/529/head -> origin/gh/bobrenjc93/529/head 2025-08-14T21:22:30.1038268Z * [new branch] gh/bobrenjc93/529/orig -> origin/gh/bobrenjc93/529/orig 2025-08-14T21:22:30.1039502Z * [new branch] gh/bobrenjc93/534/base -> origin/gh/bobrenjc93/534/base 2025-08-14T21:22:30.1040427Z * [new branch] gh/bobrenjc93/534/head -> origin/gh/bobrenjc93/534/head 2025-08-14T21:22:30.1041493Z * [new branch] gh/bobrenjc93/534/orig -> origin/gh/bobrenjc93/534/orig 2025-08-14T21:22:30.1042800Z * [new branch] gh/bobrenjc93/535/base -> 
origin/gh/bobrenjc93/535/base 2025-08-14T21:22:30.1043717Z * [new branch] gh/bobrenjc93/535/head -> origin/gh/bobrenjc93/535/head 2025-08-14T21:22:30.1044793Z * [new branch] gh/bobrenjc93/535/orig -> origin/gh/bobrenjc93/535/orig 2025-08-14T21:22:30.1054980Z * [new branch] gh/bobrenjc93/536/base -> origin/gh/bobrenjc93/536/base 2025-08-14T21:22:30.1055948Z * [new branch] gh/bobrenjc93/536/head -> origin/gh/bobrenjc93/536/head 2025-08-14T21:22:30.1057067Z * [new branch] gh/bobrenjc93/536/orig -> origin/gh/bobrenjc93/536/orig 2025-08-14T21:22:30.1058191Z * [new branch] gh/bobrenjc93/537/base -> origin/gh/bobrenjc93/537/base 2025-08-14T21:22:30.1063264Z * [new branch] gh/bobrenjc93/537/head -> origin/gh/bobrenjc93/537/head 2025-08-14T21:22:30.1063438Z * [new branch] gh/bobrenjc93/537/orig -> origin/gh/bobrenjc93/537/orig 2025-08-14T21:22:30.1063622Z * [new branch] gh/bobrenjc93/538/base -> origin/gh/bobrenjc93/538/base 2025-08-14T21:22:30.1063885Z * [new branch] gh/bobrenjc93/538/head -> origin/gh/bobrenjc93/538/head 2025-08-14T21:22:30.1064061Z * [new branch] gh/bobrenjc93/538/orig -> origin/gh/bobrenjc93/538/orig 2025-08-14T21:22:30.1064942Z * [new branch] gh/bobrenjc93/539/base -> origin/gh/bobrenjc93/539/base 2025-08-14T21:22:30.1065899Z * [new branch] gh/bobrenjc93/539/head -> origin/gh/bobrenjc93/539/head 2025-08-14T21:22:30.1066919Z * [new branch] gh/bobrenjc93/539/orig -> origin/gh/bobrenjc93/539/orig 2025-08-14T21:22:30.1068187Z * [new branch] gh/bobrenjc93/540/base -> origin/gh/bobrenjc93/540/base 2025-08-14T21:22:30.1069163Z * [new branch] gh/bobrenjc93/540/head -> origin/gh/bobrenjc93/540/head 2025-08-14T21:22:30.1070110Z * [new branch] gh/bobrenjc93/540/orig -> origin/gh/bobrenjc93/540/orig 2025-08-14T21:22:30.1072116Z * [new branch] gh/bobrenjc93/541/base -> origin/gh/bobrenjc93/541/base 2025-08-14T21:22:30.1073110Z * [new branch] gh/bobrenjc93/541/head -> origin/gh/bobrenjc93/541/head 2025-08-14T21:22:30.1074225Z * [new branch] gh/bobrenjc93/541/orig -> origin/gh/bobrenjc93/541/orig 2025-08-14T21:22:30.1075457Z * [new branch] gh/bobrenjc93/542/base -> origin/gh/bobrenjc93/542/base 2025-08-14T21:22:30.1076398Z * [new branch] gh/bobrenjc93/542/head -> origin/gh/bobrenjc93/542/head 2025-08-14T21:22:30.1077345Z * [new branch] gh/bobrenjc93/542/orig -> origin/gh/bobrenjc93/542/orig 2025-08-14T21:22:30.1078625Z * [new branch] gh/bobrenjc93/543/base -> origin/gh/bobrenjc93/543/base 2025-08-14T21:22:30.1079529Z * [new branch] gh/bobrenjc93/543/head -> origin/gh/bobrenjc93/543/head 2025-08-14T21:22:30.1080508Z * [new branch] gh/bobrenjc93/543/orig -> origin/gh/bobrenjc93/543/orig 2025-08-14T21:22:30.1081893Z * [new branch] gh/bobrenjc93/544/base -> origin/gh/bobrenjc93/544/base 2025-08-14T21:22:30.1082783Z * [new branch] gh/bobrenjc93/544/head -> origin/gh/bobrenjc93/544/head 2025-08-14T21:22:30.1083675Z * [new branch] gh/bobrenjc93/544/orig -> origin/gh/bobrenjc93/544/orig 2025-08-14T21:22:30.1084883Z * [new branch] gh/bobrenjc93/545/base -> origin/gh/bobrenjc93/545/base 2025-08-14T21:22:30.1086195Z * [new branch] gh/bobrenjc93/545/head -> origin/gh/bobrenjc93/545/head 2025-08-14T21:22:30.1086825Z * [new branch] gh/bobrenjc93/545/orig -> origin/gh/bobrenjc93/545/orig 2025-08-14T21:22:30.1096451Z * [new branch] gh/bobrenjc93/546/base -> origin/gh/bobrenjc93/546/base 2025-08-14T21:22:30.1096664Z * [new branch] gh/bobrenjc93/546/head -> origin/gh/bobrenjc93/546/head 2025-08-14T21:22:30.1096898Z * [new branch] gh/bobrenjc93/546/orig -> origin/gh/bobrenjc93/546/orig 
2025-08-14T21:22:30.1098860Z * [new branch] gh/bobrenjc93/547/base -> origin/gh/bobrenjc93/547/base 2025-08-14T21:22:30.1099845Z * [new branch] gh/bobrenjc93/547/head -> origin/gh/bobrenjc93/547/head 2025-08-14T21:22:30.1100839Z * [new branch] gh/bobrenjc93/547/orig -> origin/gh/bobrenjc93/547/orig 2025-08-14T21:22:30.1102080Z * [new branch] gh/bobrenjc93/548/base -> origin/gh/bobrenjc93/548/base 2025-08-14T21:22:30.1104932Z * [new branch] gh/bobrenjc93/548/head -> origin/gh/bobrenjc93/548/head 2025-08-14T21:22:30.1105167Z * [new branch] gh/bobrenjc93/548/orig -> origin/gh/bobrenjc93/548/orig 2025-08-14T21:22:30.1105395Z * [new branch] gh/bobrenjc93/549/base -> origin/gh/bobrenjc93/549/base 2025-08-14T21:22:30.1106383Z * [new branch] gh/bobrenjc93/549/head -> origin/gh/bobrenjc93/549/head 2025-08-14T21:22:30.1107392Z * [new branch] gh/bobrenjc93/549/orig -> origin/gh/bobrenjc93/549/orig 2025-08-14T21:22:30.1108927Z * [new branch] gh/briancoutinho/2/base -> origin/gh/briancoutinho/2/base 2025-08-14T21:22:30.1109858Z * [new branch] gh/briancoutinho/2/head -> origin/gh/briancoutinho/2/head 2025-08-14T21:22:30.1111399Z * [new branch] gh/c00w/23/base -> origin/gh/c00w/23/base 2025-08-14T21:22:30.1112478Z * [new branch] gh/c00w/23/head -> origin/gh/c00w/23/head 2025-08-14T21:22:30.1113984Z * [new branch] gh/c00w/38/base -> origin/gh/c00w/38/base 2025-08-14T21:22:30.1114860Z * [new branch] gh/c00w/38/head -> origin/gh/c00w/38/head 2025-08-14T21:22:30.1115777Z * [new branch] gh/c00w/38/orig -> origin/gh/c00w/38/orig 2025-08-14T21:22:30.1117155Z * [new branch] gh/c00w/48/base -> origin/gh/c00w/48/base 2025-08-14T21:22:30.1122539Z * [new branch] gh/c00w/48/head -> origin/gh/c00w/48/head 2025-08-14T21:22:30.1123394Z * [new branch] gh/c00w/48/orig -> origin/gh/c00w/48/orig 2025-08-14T21:22:30.1124831Z * [new branch] gh/c00w/50/base -> origin/gh/c00w/50/base 2025-08-14T21:22:30.1125854Z * [new branch] gh/c00w/50/head -> origin/gh/c00w/50/head 2025-08-14T21:22:30.1126830Z * [new branch] gh/c00w/50/orig -> origin/gh/c00w/50/orig 2025-08-14T21:22:30.1128397Z * [new branch] gh/c00w/51/base -> origin/gh/c00w/51/base 2025-08-14T21:22:30.1129495Z * [new branch] gh/c00w/51/head -> origin/gh/c00w/51/head 2025-08-14T21:22:30.1130625Z * [new branch] gh/c00w/51/orig -> origin/gh/c00w/51/orig 2025-08-14T21:22:30.1137690Z * [new branch] gh/c00w/52/base -> origin/gh/c00w/52/base 2025-08-14T21:22:30.1137952Z * [new branch] gh/c00w/52/head -> origin/gh/c00w/52/head 2025-08-14T21:22:30.1138138Z * [new branch] gh/c00w/52/orig -> origin/gh/c00w/52/orig 2025-08-14T21:22:30.1138321Z * [new branch] gh/c00w/53/base -> origin/gh/c00w/53/base 2025-08-14T21:22:30.1138512Z * [new branch] gh/c00w/53/head -> origin/gh/c00w/53/head 2025-08-14T21:22:30.1138696Z * [new branch] gh/c00w/53/orig -> origin/gh/c00w/53/orig 2025-08-14T21:22:30.1138890Z * [new branch] gh/c00w/54/base -> origin/gh/c00w/54/base 2025-08-14T21:22:30.1139263Z * [new branch] gh/c00w/54/head -> origin/gh/c00w/54/head 2025-08-14T21:22:30.1140275Z * [new branch] gh/c00w/54/orig -> origin/gh/c00w/54/orig 2025-08-14T21:22:30.1141698Z * [new branch] gh/chenmillie/1/base -> origin/gh/chenmillie/1/base 2025-08-14T21:22:30.1142852Z * [new branch] gh/chenmillie/1/head -> origin/gh/chenmillie/1/head 2025-08-14T21:22:30.1143868Z * [new branch] gh/chenmillie/1/orig -> origin/gh/chenmillie/1/orig 2025-08-14T21:22:30.1145377Z * [new branch] gh/clee2000/1/base -> origin/gh/clee2000/1/base 2025-08-14T21:22:30.1154312Z * [new branch] gh/clee2000/1/head -> origin/gh/clee2000/1/head 
2025-08-14T21:22:30.1154520Z * [new branch] gh/clee2000/1/orig -> origin/gh/clee2000/1/orig 2025-08-14T21:22:30.1154757Z * [new branch] gh/coconutruben/1/base -> origin/gh/coconutruben/1/base 2025-08-14T21:22:30.1155691Z * [new branch] gh/coconutruben/1/head -> origin/gh/coconutruben/1/head 2025-08-14T21:22:30.1157124Z * [new branch] gh/coconutruben/11/base -> origin/gh/coconutruben/11/base 2025-08-14T21:22:30.1158081Z * [new branch] gh/coconutruben/11/head -> origin/gh/coconutruben/11/head 2025-08-14T21:22:30.1159041Z * [new branch] gh/coconutruben/11/orig -> origin/gh/coconutruben/11/orig 2025-08-14T21:22:30.1164713Z * [new branch] gh/coconutruben/12/base -> origin/gh/coconutruben/12/base 2025-08-14T21:22:30.1164896Z * [new branch] gh/coconutruben/12/head -> origin/gh/coconutruben/12/head 2025-08-14T21:22:30.1165074Z * [new branch] gh/coconutruben/12/orig -> origin/gh/coconutruben/12/orig 2025-08-14T21:22:30.1165307Z * [new branch] gh/coconutruben/13/base -> origin/gh/coconutruben/13/base 2025-08-14T21:22:30.1165813Z * [new branch] gh/coconutruben/13/head -> origin/gh/coconutruben/13/head 2025-08-14T21:22:30.1166954Z * [new branch] gh/coconutruben/13/orig -> origin/gh/coconutruben/13/orig 2025-08-14T21:22:30.1168201Z * [new branch] gh/coconutruben/14/base -> origin/gh/coconutruben/14/base 2025-08-14T21:22:30.1169202Z * [new branch] gh/coconutruben/14/head -> origin/gh/coconutruben/14/head 2025-08-14T21:22:30.1170130Z * [new branch] gh/coconutruben/14/orig -> origin/gh/coconutruben/14/orig 2025-08-14T21:22:30.1171638Z * [new branch] gh/coconutruben/15/base -> origin/gh/coconutruben/15/base 2025-08-14T21:22:30.1173010Z * [new branch] gh/coconutruben/15/head -> origin/gh/coconutruben/15/head 2025-08-14T21:22:30.1174061Z * [new branch] gh/coconutruben/15/orig -> origin/gh/coconutruben/15/orig 2025-08-14T21:22:30.1179755Z * [new branch] gh/coconutruben/16/base -> origin/gh/coconutruben/16/base 2025-08-14T21:22:30.1180751Z * [new branch] gh/coconutruben/16/head -> origin/gh/coconutruben/16/head 2025-08-14T21:22:30.1181688Z * [new branch] gh/coconutruben/16/orig -> origin/gh/coconutruben/16/orig 2025-08-14T21:22:30.1183137Z * [new branch] gh/coconutruben/17/base -> origin/gh/coconutruben/17/base 2025-08-14T21:22:30.1184390Z * [new branch] gh/coconutruben/17/head -> origin/gh/coconutruben/17/head 2025-08-14T21:22:30.1185287Z * [new branch] gh/coconutruben/17/orig -> origin/gh/coconutruben/17/orig 2025-08-14T21:22:30.1186654Z * [new branch] gh/coconutruben/18/base -> origin/gh/coconutruben/18/base 2025-08-14T21:22:30.1187645Z * [new branch] gh/coconutruben/18/head -> origin/gh/coconutruben/18/head 2025-08-14T21:22:30.1188629Z * [new branch] gh/coconutruben/18/orig -> origin/gh/coconutruben/18/orig 2025-08-14T21:22:30.1195559Z * [new branch] gh/coconutruben/19/base -> origin/gh/coconutruben/19/base 2025-08-14T21:22:30.1195798Z * [new branch] gh/coconutruben/19/head -> origin/gh/coconutruben/19/head 2025-08-14T21:22:30.1196036Z * [new branch] gh/coconutruben/19/orig -> origin/gh/coconutruben/19/orig 2025-08-14T21:22:30.1196233Z * [new branch] gh/coconutruben/20/base -> origin/gh/coconutruben/20/base 2025-08-14T21:22:30.1196414Z * [new branch] gh/coconutruben/20/head -> origin/gh/coconutruben/20/head 2025-08-14T21:22:30.1196598Z * [new branch] gh/coconutruben/20/orig -> origin/gh/coconutruben/20/orig 2025-08-14T21:22:30.1197133Z * [new branch] gh/coconutruben/21/base -> origin/gh/coconutruben/21/base 2025-08-14T21:22:30.1198051Z * [new branch] gh/coconutruben/21/head -> 
origin/gh/coconutruben/21/head 2025-08-14T21:22:30.1199009Z * [new branch] gh/coconutruben/21/orig -> origin/gh/coconutruben/21/orig 2025-08-14T21:22:30.1200297Z * [new branch] gh/coconutruben/22/base -> origin/gh/coconutruben/22/base 2025-08-14T21:22:30.1201300Z * [new branch] gh/coconutruben/22/head -> origin/gh/coconutruben/22/head 2025-08-14T21:22:30.1202429Z * [new branch] gh/coconutruben/22/orig -> origin/gh/coconutruben/22/orig 2025-08-14T21:22:30.1203826Z * [new branch] gh/coconutruben/23/base -> origin/gh/coconutruben/23/base 2025-08-14T21:22:30.1213409Z * [new branch] gh/coconutruben/23/head -> origin/gh/coconutruben/23/head 2025-08-14T21:22:30.1214415Z * [new branch] gh/coconutruben/23/orig -> origin/gh/coconutruben/23/orig 2025-08-14T21:22:30.1215830Z * [new branch] gh/coconutruben/24/base -> origin/gh/coconutruben/24/base 2025-08-14T21:22:30.1216827Z * [new branch] gh/coconutruben/24/head -> origin/gh/coconutruben/24/head 2025-08-14T21:22:30.1217772Z * [new branch] gh/coconutruben/24/orig -> origin/gh/coconutruben/24/orig 2025-08-14T21:22:30.1222662Z * [new branch] gh/coconutruben/25/base -> origin/gh/coconutruben/25/base 2025-08-14T21:22:30.1223200Z * [new branch] gh/coconutruben/25/head -> origin/gh/coconutruben/25/head 2025-08-14T21:22:30.1224970Z * [new branch] gh/coconutruben/25/orig -> origin/gh/coconutruben/25/orig 2025-08-14T21:22:30.1226494Z * [new branch] gh/coconutruben/26/base -> origin/gh/coconutruben/26/base 2025-08-14T21:22:30.1227511Z * [new branch] gh/coconutruben/26/head -> origin/gh/coconutruben/26/head 2025-08-14T21:22:30.1228458Z * [new branch] gh/coconutruben/26/orig -> origin/gh/coconutruben/26/orig 2025-08-14T21:22:30.1229610Z * [new branch] gh/coconutruben/27/base -> origin/gh/coconutruben/27/base 2025-08-14T21:22:30.1230659Z * [new branch] gh/coconutruben/27/head -> origin/gh/coconutruben/27/head 2025-08-14T21:22:30.1231725Z * [new branch] gh/coconutruben/27/orig -> origin/gh/coconutruben/27/orig 2025-08-14T21:22:30.1233992Z * [new branch] gh/codingwithsurya/10/base -> origin/gh/codingwithsurya/10/base 2025-08-14T21:22:30.1235136Z * [new branch] gh/codingwithsurya/10/head -> origin/gh/codingwithsurya/10/head 2025-08-14T21:22:30.1236201Z * [new branch] gh/codingwithsurya/10/orig -> origin/gh/codingwithsurya/10/orig 2025-08-14T21:22:30.1237547Z * [new branch] gh/codingwithsurya/11/base -> origin/gh/codingwithsurya/11/base 2025-08-14T21:22:30.1238744Z * [new branch] gh/codingwithsurya/11/head -> origin/gh/codingwithsurya/11/head 2025-08-14T21:22:30.1239592Z * [new branch] gh/codingwithsurya/11/orig -> origin/gh/codingwithsurya/11/orig 2025-08-14T21:22:30.1241391Z * [new branch] gh/codingwithsurya/12/base -> origin/gh/codingwithsurya/12/base 2025-08-14T21:22:30.1242623Z * [new branch] gh/codingwithsurya/12/head -> origin/gh/codingwithsurya/12/head 2025-08-14T21:22:30.1243727Z * [new branch] gh/codingwithsurya/12/orig -> origin/gh/codingwithsurya/12/orig 2025-08-14T21:22:30.1245022Z * [new branch] gh/codingwithsurya/13/base -> origin/gh/codingwithsurya/13/base 2025-08-14T21:22:30.1246020Z * [new branch] gh/codingwithsurya/13/head -> origin/gh/codingwithsurya/13/head 2025-08-14T21:22:30.1251481Z * [new branch] gh/codingwithsurya/13/orig -> origin/gh/codingwithsurya/13/orig 2025-08-14T21:22:30.1256288Z * [new branch] gh/codingwithsurya/14/base -> origin/gh/codingwithsurya/14/base 2025-08-14T21:22:30.1256551Z * [new branch] gh/codingwithsurya/14/head -> origin/gh/codingwithsurya/14/head 2025-08-14T21:22:30.1256797Z * [new branch] gh/codingwithsurya/14/orig -> 
origin/gh/codingwithsurya/14/orig 2025-08-14T21:22:30.1256999Z * [new branch] gh/codingwithsurya/15/base -> origin/gh/codingwithsurya/15/base 2025-08-14T21:22:30.1257332Z * [new branch] gh/codingwithsurya/15/head -> origin/gh/codingwithsurya/15/head 2025-08-14T21:22:30.1258365Z * [new branch] gh/codingwithsurya/15/orig -> origin/gh/codingwithsurya/15/orig 2025-08-14T21:22:30.1259817Z * [new branch] gh/codingwithsurya/16/base -> origin/gh/codingwithsurya/16/base 2025-08-14T21:22:30.1260805Z * [new branch] gh/codingwithsurya/16/head -> origin/gh/codingwithsurya/16/head 2025-08-14T21:22:30.1261764Z * [new branch] gh/codingwithsurya/16/orig -> origin/gh/codingwithsurya/16/orig 2025-08-14T21:22:30.1264272Z * [new branch] gh/codingwithsurya/17/base -> origin/gh/codingwithsurya/17/base 2025-08-14T21:22:30.1264499Z * [new branch] gh/codingwithsurya/17/head -> origin/gh/codingwithsurya/17/head 2025-08-14T21:22:30.1265498Z * [new branch] gh/codingwithsurya/17/orig -> origin/gh/codingwithsurya/17/orig 2025-08-14T21:22:30.1266898Z * [new branch] gh/codingwithsurya/18/base -> origin/gh/codingwithsurya/18/base 2025-08-14T21:22:30.1267858Z * [new branch] gh/codingwithsurya/18/head -> origin/gh/codingwithsurya/18/head 2025-08-14T21:22:30.1268807Z * [new branch] gh/codingwithsurya/18/orig -> origin/gh/codingwithsurya/18/orig 2025-08-14T21:22:30.1270202Z * [new branch] gh/codingwithsurya/19/base -> origin/gh/codingwithsurya/19/base 2025-08-14T21:22:30.1271145Z * [new branch] gh/codingwithsurya/19/head -> origin/gh/codingwithsurya/19/head 2025-08-14T21:22:30.1272087Z * [new branch] gh/codingwithsurya/19/orig -> origin/gh/codingwithsurya/19/orig 2025-08-14T21:22:30.1273428Z * [new branch] gh/codingwithsurya/20/base -> origin/gh/codingwithsurya/20/base 2025-08-14T21:22:30.1274377Z * [new branch] gh/codingwithsurya/20/head -> origin/gh/codingwithsurya/20/head 2025-08-14T21:22:30.1275283Z * [new branch] gh/codingwithsurya/20/orig -> origin/gh/codingwithsurya/20/orig 2025-08-14T21:22:30.1281162Z * [new branch] gh/codingwithsurya/21/base -> origin/gh/codingwithsurya/21/base 2025-08-14T21:22:30.1282254Z * [new branch] gh/codingwithsurya/21/head -> origin/gh/codingwithsurya/21/head 2025-08-14T21:22:30.1283342Z * [new branch] gh/codingwithsurya/21/orig -> origin/gh/codingwithsurya/21/orig 2025-08-14T21:22:30.1284974Z * [new branch] gh/codingwithsurya/8/base -> origin/gh/codingwithsurya/8/base 2025-08-14T21:22:30.1285999Z * [new branch] gh/codingwithsurya/8/head -> origin/gh/codingwithsurya/8/head 2025-08-14T21:22:30.1286898Z * [new branch] gh/codingwithsurya/8/orig -> origin/gh/codingwithsurya/8/orig 2025-08-14T21:22:30.1288307Z * [new branch] gh/codingwithsurya/9/base -> origin/gh/codingwithsurya/9/base 2025-08-14T21:22:30.1289262Z * [new branch] gh/codingwithsurya/9/head -> origin/gh/codingwithsurya/9/head 2025-08-14T21:22:30.1290197Z * [new branch] gh/codingwithsurya/9/orig -> origin/gh/codingwithsurya/9/orig 2025-08-14T21:22:30.1295385Z * [new branch] gh/colinchan15/1/base -> origin/gh/colinchan15/1/base 2025-08-14T21:22:30.1295566Z * [new branch] gh/colinchan15/1/head -> origin/gh/colinchan15/1/head 2025-08-14T21:22:30.1295759Z * [new branch] gh/colinchan15/2/base -> origin/gh/colinchan15/2/base 2025-08-14T21:22:30.1295928Z * [new branch] gh/colinchan15/2/head -> origin/gh/colinchan15/2/head 2025-08-14T21:22:30.1296127Z * [new branch] gh/colinchan15/3/base -> origin/gh/colinchan15/3/base 2025-08-14T21:22:30.1297007Z * [new branch] gh/colinchan15/3/head -> origin/gh/colinchan15/3/head 2025-08-14T21:22:30.1298134Z 
* [new branch] gh/colinchan15/4/base -> origin/gh/colinchan15/4/base 2025-08-14T21:22:30.1298989Z * [new branch] gh/colinchan15/4/head -> origin/gh/colinchan15/4/head 2025-08-14T21:22:30.1300131Z * [new branch] gh/colinchan15/5/base -> origin/gh/colinchan15/5/base 2025-08-14T21:22:30.1301049Z * [new branch] gh/colinchan15/5/head -> origin/gh/colinchan15/5/head 2025-08-14T21:22:30.1302165Z * [new branch] gh/colinchan15/6/base -> origin/gh/colinchan15/6/base 2025-08-14T21:22:30.1303065Z * [new branch] gh/colinchan15/6/head -> origin/gh/colinchan15/6/head 2025-08-14T21:22:30.1304631Z * [new branch] gh/davidberard98/351/base -> origin/gh/davidberard98/351/base 2025-08-14T21:22:30.1305686Z * [new branch] gh/davidberard98/351/head -> origin/gh/davidberard98/351/head 2025-08-14T21:22:30.1310933Z * [new branch] gh/davidberard98/351/orig -> origin/gh/davidberard98/351/orig 2025-08-14T21:22:30.1312154Z * [new branch] gh/davidberard98/353/base -> origin/gh/davidberard98/353/base 2025-08-14T21:22:30.1313065Z * [new branch] gh/davidberard98/353/head -> origin/gh/davidberard98/353/head 2025-08-14T21:22:30.1313993Z * [new branch] gh/davidberard98/353/orig -> origin/gh/davidberard98/353/orig 2025-08-14T21:22:30.1315270Z * [new branch] gh/davidberard98/356/base -> origin/gh/davidberard98/356/base 2025-08-14T21:22:30.1316205Z * [new branch] gh/davidberard98/356/head -> origin/gh/davidberard98/356/head 2025-08-14T21:22:30.1317149Z * [new branch] gh/davidberard98/356/orig -> origin/gh/davidberard98/356/orig 2025-08-14T21:22:30.1318550Z * [new branch] gh/davidberard98/382/base -> origin/gh/davidberard98/382/base 2025-08-14T21:22:30.1319673Z * [new branch] gh/davidberard98/382/head -> origin/gh/davidberard98/382/head 2025-08-14T21:22:30.1324308Z * [new branch] gh/davidberard98/382/orig -> origin/gh/davidberard98/382/orig 2025-08-14T21:22:30.1324494Z * [new branch] gh/davidberard98/386/base -> origin/gh/davidberard98/386/base 2025-08-14T21:22:30.1324686Z * [new branch] gh/davidberard98/386/head -> origin/gh/davidberard98/386/head 2025-08-14T21:22:30.1324865Z * [new branch] gh/davidberard98/386/orig -> origin/gh/davidberard98/386/orig 2025-08-14T21:22:30.1325565Z * [new branch] gh/davidberard98/389/base -> origin/gh/davidberard98/389/base 2025-08-14T21:22:30.1326465Z * [new branch] gh/davidberard98/389/head -> origin/gh/davidberard98/389/head 2025-08-14T21:22:30.1327373Z * [new branch] gh/davidberard98/389/orig -> origin/gh/davidberard98/389/orig 2025-08-14T21:22:30.1328766Z * [new branch] gh/davidberard98/390/base -> origin/gh/davidberard98/390/base 2025-08-14T21:22:30.1329631Z * [new branch] gh/davidberard98/390/head -> origin/gh/davidberard98/390/head 2025-08-14T21:22:30.1330576Z * [new branch] gh/davidberard98/390/orig -> origin/gh/davidberard98/390/orig 2025-08-14T21:22:30.1331822Z * [new branch] gh/davidberard98/391/base -> origin/gh/davidberard98/391/base 2025-08-14T21:22:30.1332787Z * [new branch] gh/davidberard98/391/head -> origin/gh/davidberard98/391/head 2025-08-14T21:22:30.1333665Z * [new branch] gh/davidberard98/391/orig -> origin/gh/davidberard98/391/orig 2025-08-14T21:22:30.1339397Z * [new branch] gh/davidberard98/392/base -> origin/gh/davidberard98/392/base 2025-08-14T21:22:30.1340321Z * [new branch] gh/davidberard98/392/head -> origin/gh/davidberard98/392/head 2025-08-14T21:22:30.1341245Z * [new branch] gh/davidberard98/392/orig -> origin/gh/davidberard98/392/orig 2025-08-14T21:22:30.1342560Z * [new branch] gh/davidberard98/393/base -> origin/gh/davidberard98/393/base 
2025-08-14T21:22:30.1343572Z * [new branch] gh/davidberard98/393/head -> origin/gh/davidberard98/393/head 2025-08-14T21:22:30.1344505Z * [new branch] gh/davidberard98/393/orig -> origin/gh/davidberard98/393/orig 2025-08-14T21:22:30.1345932Z * [new branch] gh/davidberard98/394/base -> origin/gh/davidberard98/394/base 2025-08-14T21:22:30.1346900Z * [new branch] gh/davidberard98/394/head -> origin/gh/davidberard98/394/head 2025-08-14T21:22:30.1347886Z * [new branch] gh/davidberard98/394/orig -> origin/gh/davidberard98/394/orig 2025-08-14T21:22:30.1353696Z * [new branch] gh/davidberard98/395/base -> origin/gh/davidberard98/395/base 2025-08-14T21:22:30.1353890Z * [new branch] gh/davidberard98/395/head -> origin/gh/davidberard98/395/head 2025-08-14T21:22:30.1354084Z * [new branch] gh/davidberard98/395/orig -> origin/gh/davidberard98/395/orig 2025-08-14T21:22:30.1354269Z * [new branch] gh/davidberard98/396/base -> origin/gh/davidberard98/396/base 2025-08-14T21:22:30.1354457Z * [new branch] gh/davidberard98/396/head -> origin/gh/davidberard98/396/head 2025-08-14T21:22:30.1354936Z * [new branch] gh/davidberard98/396/orig -> origin/gh/davidberard98/396/orig 2025-08-14T21:22:30.1356527Z * [new branch] gh/davidberard98/397/base -> origin/gh/davidberard98/397/base 2025-08-14T21:22:30.1357445Z * [new branch] gh/davidberard98/397/head -> origin/gh/davidberard98/397/head 2025-08-14T21:22:30.1358382Z * [new branch] gh/davidberard98/397/orig -> origin/gh/davidberard98/397/orig 2025-08-14T21:22:30.1359848Z * [new branch] gh/davidberard98/398/base -> origin/gh/davidberard98/398/base 2025-08-14T21:22:30.1360586Z * [new branch] gh/davidberard98/398/head -> origin/gh/davidberard98/398/head 2025-08-14T21:22:30.1361643Z * [new branch] gh/davidberard98/398/orig -> origin/gh/davidberard98/398/orig 2025-08-14T21:22:30.1363254Z * [new branch] gh/desertfire/570/base -> origin/gh/desertfire/570/base 2025-08-14T21:22:30.1372745Z * [new branch] gh/desertfire/570/head -> origin/gh/desertfire/570/head 2025-08-14T21:22:30.1373729Z * [new branch] gh/desertfire/570/orig -> origin/gh/desertfire/570/orig 2025-08-14T21:22:30.1374903Z * [new branch] gh/desertfire/572/base -> origin/gh/desertfire/572/base 2025-08-14T21:22:30.1375967Z * [new branch] gh/desertfire/572/head -> origin/gh/desertfire/572/head 2025-08-14T21:22:30.1376911Z * [new branch] gh/desertfire/572/orig -> origin/gh/desertfire/572/orig 2025-08-14T21:22:30.1386551Z * [new branch] gh/desertfire/589/base -> origin/gh/desertfire/589/base 2025-08-14T21:22:30.1386788Z * [new branch] gh/desertfire/589/head -> origin/gh/desertfire/589/head 2025-08-14T21:22:30.1387007Z * [new branch] gh/desertfire/589/orig -> origin/gh/desertfire/589/orig 2025-08-14T21:22:30.1387231Z * [new branch] gh/desertfire/590/base -> origin/gh/desertfire/590/base 2025-08-14T21:22:30.1387448Z * [new branch] gh/desertfire/590/head -> origin/gh/desertfire/590/head 2025-08-14T21:22:30.1387674Z * [new branch] gh/desertfire/590/orig -> origin/gh/desertfire/590/orig 2025-08-14T21:22:30.1387902Z * [new branch] gh/desertfire/591/base -> origin/gh/desertfire/591/base 2025-08-14T21:22:30.1388436Z * [new branch] gh/desertfire/591/head -> origin/gh/desertfire/591/head 2025-08-14T21:22:30.1389464Z * [new branch] gh/desertfire/591/orig -> origin/gh/desertfire/591/orig 2025-08-14T21:22:30.1390703Z * [new branch] gh/desertfire/592/base -> origin/gh/desertfire/592/base 2025-08-14T21:22:30.1391607Z * [new branch] gh/desertfire/592/head -> origin/gh/desertfire/592/head 2025-08-14T21:22:30.1392796Z * [new branch] 
gh/desertfire/592/orig -> origin/gh/desertfire/592/orig 2025-08-14T21:22:30.1394143Z * [new branch] gh/desertfire/593/base -> origin/gh/desertfire/593/base 2025-08-14T21:22:30.1395041Z * [new branch] gh/desertfire/593/head -> origin/gh/desertfire/593/head 2025-08-14T21:22:30.1396072Z * [new branch] gh/desertfire/593/orig -> origin/gh/desertfire/593/orig 2025-08-14T21:22:30.1397726Z * [new branch] gh/desertfire/594/base -> origin/gh/desertfire/594/base 2025-08-14T21:22:30.1398561Z * [new branch] gh/desertfire/594/head -> origin/gh/desertfire/594/head 2025-08-14T21:22:30.1399620Z * [new branch] gh/desertfire/594/orig -> origin/gh/desertfire/594/orig 2025-08-14T21:22:30.1400836Z * [new branch] gh/desertfire/595/base -> origin/gh/desertfire/595/base 2025-08-14T21:22:30.1401893Z * [new branch] gh/desertfire/595/head -> origin/gh/desertfire/595/head 2025-08-14T21:22:30.1402828Z * [new branch] gh/desertfire/595/orig -> origin/gh/desertfire/595/orig 2025-08-14T21:22:30.1404074Z * [new branch] gh/desertfire/596/base -> origin/gh/desertfire/596/base 2025-08-14T21:22:30.1404972Z * [new branch] gh/desertfire/596/head -> origin/gh/desertfire/596/head 2025-08-14T21:22:30.1406120Z * [new branch] gh/desertfire/596/orig -> origin/gh/desertfire/596/orig 2025-08-14T21:22:30.1412190Z * [new branch] gh/desertfire/597/base -> origin/gh/desertfire/597/base 2025-08-14T21:22:30.1415641Z * [new branch] gh/desertfire/597/head -> origin/gh/desertfire/597/head 2025-08-14T21:22:30.1415873Z * [new branch] gh/desertfire/597/orig -> origin/gh/desertfire/597/orig 2025-08-14T21:22:30.1416095Z * [new branch] gh/dharakk/1/base -> origin/gh/dharakk/1/base 2025-08-14T21:22:30.1416549Z * [new branch] gh/dharakk/1/head -> origin/gh/dharakk/1/head 2025-08-14T21:22:30.1418352Z * [new branch] gh/dharakk/4/base -> origin/gh/dharakk/4/base 2025-08-14T21:22:30.1419239Z * [new branch] gh/dharakk/4/head -> origin/gh/dharakk/4/head 2025-08-14T21:22:30.1420256Z * [new branch] gh/dharakk/4/orig -> origin/gh/dharakk/4/orig 2025-08-14T21:22:30.1428078Z * [new branch] gh/drisspg/140/base -> origin/gh/drisspg/140/base 2025-08-14T21:22:30.1428298Z * [new branch] gh/drisspg/140/head -> origin/gh/drisspg/140/head 2025-08-14T21:22:30.1428515Z * [new branch] gh/drisspg/140/orig -> origin/gh/drisspg/140/orig 2025-08-14T21:22:30.1428717Z * [new branch] gh/drisspg/149/base -> origin/gh/drisspg/149/base 2025-08-14T21:22:30.1428924Z * [new branch] gh/drisspg/149/head -> origin/gh/drisspg/149/head 2025-08-14T21:22:30.1429124Z * [new branch] gh/drisspg/149/orig -> origin/gh/drisspg/149/orig 2025-08-14T21:22:30.1429328Z * [new branch] gh/drisspg/150/base -> origin/gh/drisspg/150/base 2025-08-14T21:22:30.1429635Z * [new branch] gh/drisspg/150/head -> origin/gh/drisspg/150/head 2025-08-14T21:22:30.1430348Z * [new branch] gh/drisspg/150/orig -> origin/gh/drisspg/150/orig 2025-08-14T21:22:30.1431667Z * [new branch] gh/drisspg/151/base -> origin/gh/drisspg/151/base 2025-08-14T21:22:30.1433025Z * [new branch] gh/drisspg/151/head -> origin/gh/drisspg/151/head 2025-08-14T21:22:30.1433973Z * [new branch] gh/drisspg/151/orig -> origin/gh/drisspg/151/orig 2025-08-14T21:22:30.1435231Z * [new branch] gh/drisspg/158/base -> origin/gh/drisspg/158/base 2025-08-14T21:22:30.1440627Z * [new branch] gh/drisspg/158/head -> origin/gh/drisspg/158/head 2025-08-14T21:22:30.1441709Z * [new branch] gh/drisspg/158/orig -> origin/gh/drisspg/158/orig 2025-08-14T21:22:30.1442971Z * [new branch] gh/drisspg/159/base -> origin/gh/drisspg/159/base 2025-08-14T21:22:30.1443867Z * [new branch] 
gh/drisspg/159/head -> origin/gh/drisspg/159/head 2025-08-14T21:22:30.1445138Z * [new branch] gh/drisspg/159/orig -> origin/gh/drisspg/159/orig 2025-08-14T21:22:30.1446157Z * [new branch] gh/drisspg/166/base -> origin/gh/drisspg/166/base 2025-08-14T21:22:30.1447119Z * [new branch] gh/drisspg/166/head -> origin/gh/drisspg/166/head 2025-08-14T21:22:30.1448360Z * [new branch] gh/drisspg/166/orig -> origin/gh/drisspg/166/orig 2025-08-14T21:22:30.1450192Z * [new branch] gh/drisspg/168/base -> origin/gh/drisspg/168/base 2025-08-14T21:22:30.1456925Z * [new branch] gh/drisspg/168/head -> origin/gh/drisspg/168/head 2025-08-14T21:22:30.1457445Z * [new branch] gh/drisspg/168/orig -> origin/gh/drisspg/168/orig 2025-08-14T21:22:30.1457845Z * [new branch] gh/drisspg/169/base -> origin/gh/drisspg/169/base 2025-08-14T21:22:30.1458419Z * [new branch] gh/drisspg/169/head -> origin/gh/drisspg/169/head 2025-08-14T21:22:30.1458976Z * [new branch] gh/drisspg/169/orig -> origin/gh/drisspg/169/orig 2025-08-14T21:22:30.1459511Z * [new branch] gh/drisspg/170/base -> origin/gh/drisspg/170/base 2025-08-14T21:22:30.1459954Z * [new branch] gh/drisspg/170/head -> origin/gh/drisspg/170/head 2025-08-14T21:22:30.1460462Z * [new branch] gh/drisspg/170/orig -> origin/gh/drisspg/170/orig 2025-08-14T21:22:30.1460851Z * [new branch] gh/drisspg/171/base -> origin/gh/drisspg/171/base 2025-08-14T21:22:30.1461238Z * [new branch] gh/drisspg/171/head -> origin/gh/drisspg/171/head 2025-08-14T21:22:30.1461626Z * [new branch] gh/drisspg/171/orig -> origin/gh/drisspg/171/orig 2025-08-14T21:22:30.1462904Z * [new branch] gh/drisspg/172/base -> origin/gh/drisspg/172/base 2025-08-14T21:22:30.1463722Z * [new branch] gh/drisspg/172/head -> origin/gh/drisspg/172/head 2025-08-14T21:22:30.1464795Z * [new branch] gh/drisspg/172/orig -> origin/gh/drisspg/172/orig 2025-08-14T21:22:30.1470539Z * [new branch] gh/drisspg/173/base -> origin/gh/drisspg/173/base 2025-08-14T21:22:30.1471244Z * [new branch] gh/drisspg/173/head -> origin/gh/drisspg/173/head 2025-08-14T21:22:30.1472162Z * [new branch] gh/drisspg/173/orig -> origin/gh/drisspg/173/orig 2025-08-14T21:22:30.1473481Z * [new branch] gh/drisspg/174/base -> origin/gh/drisspg/174/base 2025-08-14T21:22:30.1474371Z * [new branch] gh/drisspg/174/head -> origin/gh/drisspg/174/head 2025-08-14T21:22:30.1475403Z * [new branch] gh/drisspg/174/orig -> origin/gh/drisspg/174/orig 2025-08-14T21:22:30.1476984Z * [new branch] gh/drisspg/175/base -> origin/gh/drisspg/175/base 2025-08-14T21:22:30.1477873Z * [new branch] gh/drisspg/175/head -> origin/gh/drisspg/175/head 2025-08-14T21:22:30.1478793Z * [new branch] gh/drisspg/175/orig -> origin/gh/drisspg/175/orig 2025-08-14T21:22:30.1484021Z * [new branch] gh/drisspg/176/base -> origin/gh/drisspg/176/base 2025-08-14T21:22:30.1484419Z * [new branch] gh/drisspg/176/head -> origin/gh/drisspg/176/head 2025-08-14T21:22:30.1484817Z * [new branch] gh/drisspg/176/orig -> origin/gh/drisspg/176/orig 2025-08-14T21:22:30.1485212Z * [new branch] gh/drisspg/177/base -> origin/gh/drisspg/177/base 2025-08-14T21:22:30.1485597Z * [new branch] gh/drisspg/177/head -> origin/gh/drisspg/177/head 2025-08-14T21:22:30.1486118Z * [new branch] gh/drisspg/177/orig -> origin/gh/drisspg/177/orig 2025-08-14T21:22:30.1487463Z * [new branch] gh/drisspg/178/base -> origin/gh/drisspg/178/base 2025-08-14T21:22:30.1488346Z * [new branch] gh/drisspg/178/head -> origin/gh/drisspg/178/head 2025-08-14T21:22:30.1489245Z * [new branch] gh/drisspg/178/orig -> origin/gh/drisspg/178/orig 
2025-08-14T21:22:30.1490566Z * [new branch] gh/drisspg/179/base -> origin/gh/drisspg/179/base 2025-08-14T21:22:30.1491419Z * [new branch] gh/drisspg/179/head -> origin/gh/drisspg/179/head 2025-08-14T21:22:30.1492333Z * [new branch] gh/drisspg/179/orig -> origin/gh/drisspg/179/orig 2025-08-14T21:22:30.1493701Z * [new branch] gh/drisspg/180/base -> origin/gh/drisspg/180/base 2025-08-14T21:22:30.1503225Z * [new branch] gh/drisspg/180/head -> origin/gh/drisspg/180/head 2025-08-14T21:22:30.1504587Z * [new branch] gh/drisspg/180/orig -> origin/gh/drisspg/180/orig 2025-08-14T21:22:30.1505817Z * [new branch] gh/drisspg/181/base -> origin/gh/drisspg/181/base 2025-08-14T21:22:30.1506753Z * [new branch] gh/drisspg/181/head -> origin/gh/drisspg/181/head 2025-08-14T21:22:30.1507598Z * [new branch] gh/drisspg/181/orig -> origin/gh/drisspg/181/orig 2025-08-14T21:22:30.1512845Z * [new branch] gh/drisspg/182/base -> origin/gh/drisspg/182/base 2025-08-14T21:22:30.1513245Z * [new branch] gh/drisspg/182/head -> origin/gh/drisspg/182/head 2025-08-14T21:22:30.1513645Z * [new branch] gh/drisspg/183/base -> origin/gh/drisspg/183/base 2025-08-14T21:22:30.1514031Z * [new branch] gh/drisspg/183/head -> origin/gh/drisspg/183/head 2025-08-14T21:22:30.1514426Z * [new branch] gh/drisspg/184/base -> origin/gh/drisspg/184/base 2025-08-14T21:22:30.1514817Z * [new branch] gh/drisspg/184/head -> origin/gh/drisspg/184/head 2025-08-14T21:22:30.1515928Z * [new branch] gh/drisspg/185/base -> origin/gh/drisspg/185/base 2025-08-14T21:22:30.1516788Z * [new branch] gh/drisspg/185/head -> origin/gh/drisspg/185/head 2025-08-14T21:22:30.1518574Z * [new branch] gh/dsjohns2/1/base -> origin/gh/dsjohns2/1/base 2025-08-14T21:22:30.1519364Z * [new branch] gh/dsjohns2/1/head -> origin/gh/dsjohns2/1/head 2025-08-14T21:22:30.1520978Z * [new branch] gh/eellison/784/base -> origin/gh/eellison/784/base 2025-08-14T21:22:30.1522019Z * [new branch] gh/eellison/784/head -> origin/gh/eellison/784/head 2025-08-14T21:22:30.1523073Z * [new branch] gh/eellison/784/orig -> origin/gh/eellison/784/orig 2025-08-14T21:22:30.1524742Z * [new branch] gh/eellison/785/base -> origin/gh/eellison/785/base 2025-08-14T21:22:30.1525619Z * [new branch] gh/eellison/785/head -> origin/gh/eellison/785/head 2025-08-14T21:22:30.1526560Z * [new branch] gh/eellison/785/orig -> origin/gh/eellison/785/orig 2025-08-14T21:22:30.1527949Z * [new branch] gh/eellison/789/base -> origin/gh/eellison/789/base 2025-08-14T21:22:30.1528824Z * [new branch] gh/eellison/789/head -> origin/gh/eellison/789/head 2025-08-14T21:22:30.1529752Z * [new branch] gh/eellison/789/orig -> origin/gh/eellison/789/orig 2025-08-14T21:22:30.1531107Z * [new branch] gh/eellison/800/base -> origin/gh/eellison/800/base 2025-08-14T21:22:30.1531959Z * [new branch] gh/eellison/800/head -> origin/gh/eellison/800/head 2025-08-14T21:22:30.1532880Z * [new branch] gh/eellison/800/orig -> origin/gh/eellison/800/orig 2025-08-14T21:22:30.1534297Z * [new branch] gh/eellison/801/base -> origin/gh/eellison/801/base 2025-08-14T21:22:30.1535167Z * [new branch] gh/eellison/801/head -> origin/gh/eellison/801/head 2025-08-14T21:22:30.1536067Z * [new branch] gh/eellison/801/orig -> origin/gh/eellison/801/orig 2025-08-14T21:22:30.1546107Z * [new branch] gh/eellison/802/base -> origin/gh/eellison/802/base 2025-08-14T21:22:30.1546727Z * [new branch] gh/eellison/802/head -> origin/gh/eellison/802/head 2025-08-14T21:22:30.1547209Z * [new branch] gh/eellison/802/orig -> origin/gh/eellison/802/orig 2025-08-14T21:22:30.1547605Z * [new branch] 
gh/eellison/805/base -> origin/gh/eellison/805/base 2025-08-14T21:22:30.1548002Z * [new branch] gh/eellison/805/head -> origin/gh/eellison/805/head 2025-08-14T21:22:30.1549131Z * [new branch] gh/eellison/805/orig -> origin/gh/eellison/805/orig 2025-08-14T21:22:30.1550737Z * [new branch] gh/eellison/808/base -> origin/gh/eellison/808/base 2025-08-14T21:22:30.1551573Z * [new branch] gh/eellison/808/head -> origin/gh/eellison/808/head 2025-08-14T21:22:30.1554141Z * [new branch] gh/eellison/808/orig -> origin/gh/eellison/808/orig 2025-08-14T21:22:30.1554544Z * [new branch] gh/eellison/809/base -> origin/gh/eellison/809/base 2025-08-14T21:22:30.1554947Z * [new branch] gh/eellison/809/head -> origin/gh/eellison/809/head 2025-08-14T21:22:30.1555759Z * [new branch] gh/eellison/809/orig -> origin/gh/eellison/809/orig 2025-08-14T21:22:30.1557128Z * [new branch] gh/eellison/810/base -> origin/gh/eellison/810/base 2025-08-14T21:22:30.1558025Z * [new branch] gh/eellison/810/head -> origin/gh/eellison/810/head 2025-08-14T21:22:30.1558931Z * [new branch] gh/eellison/810/orig -> origin/gh/eellison/810/orig 2025-08-14T21:22:30.1560807Z * [new branch] gh/eellison/811/base -> origin/gh/eellison/811/base 2025-08-14T21:22:30.1561671Z * [new branch] gh/eellison/811/head -> origin/gh/eellison/811/head 2025-08-14T21:22:30.1562594Z * [new branch] gh/eellison/811/orig -> origin/gh/eellison/811/orig 2025-08-14T21:22:30.1564141Z * [new branch] gh/eellison/812/base -> origin/gh/eellison/812/base 2025-08-14T21:22:30.1564871Z * [new branch] gh/eellison/812/head -> origin/gh/eellison/812/head 2025-08-14T21:22:30.1565808Z * [new branch] gh/eellison/812/orig -> origin/gh/eellison/812/orig 2025-08-14T21:22:30.1571416Z * [new branch] gh/eellison/813/base -> origin/gh/eellison/813/base 2025-08-14T21:22:30.1572281Z * [new branch] gh/eellison/813/head -> origin/gh/eellison/813/head 2025-08-14T21:22:30.1573187Z * [new branch] gh/eellison/813/orig -> origin/gh/eellison/813/orig 2025-08-14T21:22:30.1575034Z * [new branch] gh/etaf/132/base -> origin/gh/etaf/132/base 2025-08-14T21:22:30.1575667Z * [new branch] gh/etaf/132/head -> origin/gh/etaf/132/head 2025-08-14T21:22:30.1576543Z * [new branch] gh/etaf/132/orig -> origin/gh/etaf/132/orig 2025-08-14T21:22:30.1577922Z * [new branch] gh/etaf/138/base -> origin/gh/etaf/138/base 2025-08-14T21:22:30.1578950Z * [new branch] gh/etaf/138/head -> origin/gh/etaf/138/head 2025-08-14T21:22:30.1579847Z * [new branch] gh/etaf/138/orig -> origin/gh/etaf/138/orig 2025-08-14T21:22:30.1585092Z * [new branch] gh/etaf/140/base -> origin/gh/etaf/140/base 2025-08-14T21:22:30.1585523Z * [new branch] gh/etaf/140/head -> origin/gh/etaf/140/head 2025-08-14T21:22:30.1585890Z * [new branch] gh/etaf/140/orig -> origin/gh/etaf/140/orig 2025-08-14T21:22:30.1586250Z * [new branch] gh/etaf/143/base -> origin/gh/etaf/143/base 2025-08-14T21:22:30.1586617Z * [new branch] gh/etaf/143/head -> origin/gh/etaf/143/head 2025-08-14T21:22:30.1586979Z * [new branch] gh/etaf/143/orig -> origin/gh/etaf/143/orig 2025-08-14T21:22:30.1588130Z * [new branch] gh/etaf/147/base -> origin/gh/etaf/147/base 2025-08-14T21:22:30.1588986Z * [new branch] gh/etaf/147/head -> origin/gh/etaf/147/head 2025-08-14T21:22:30.1590314Z * [new branch] gh/etaf/148/base -> origin/gh/etaf/148/base 2025-08-14T21:22:30.1591151Z * [new branch] gh/etaf/148/head -> origin/gh/etaf/148/head 2025-08-14T21:22:30.1592039Z * [new branch] gh/etaf/148/orig -> origin/gh/etaf/148/orig 2025-08-14T21:22:30.1593395Z * [new branch] gh/etaf/149/base -> 
origin/gh/etaf/149/base 2025-08-14T21:22:30.1594403Z * [new branch] gh/etaf/149/head -> origin/gh/etaf/149/head 2025-08-14T21:22:30.1603768Z * [new branch] gh/etaf/149/orig -> origin/gh/etaf/149/orig 2025-08-14T21:22:30.1604240Z * [new branch] gh/etaf/150/base -> origin/gh/etaf/150/base 2025-08-14T21:22:30.1604720Z * [new branch] gh/etaf/150/head -> origin/gh/etaf/150/head 2025-08-14T21:22:30.1605210Z * [new branch] gh/etaf/150/orig -> origin/gh/etaf/150/orig 2025-08-14T21:22:30.1605634Z * [new branch] gh/etaf/151/base -> origin/gh/etaf/151/base 2025-08-14T21:22:30.1605994Z * [new branch] gh/etaf/151/head -> origin/gh/etaf/151/head 2025-08-14T21:22:30.1606374Z * [new branch] gh/etaf/151/orig -> origin/gh/etaf/151/orig 2025-08-14T21:22:30.1607712Z * [new branch] gh/etaf/152/base -> origin/gh/etaf/152/base 2025-08-14T21:22:30.1608670Z * [new branch] gh/etaf/152/head -> origin/gh/etaf/152/head 2025-08-14T21:22:30.1609627Z * [new branch] gh/etaf/152/orig -> origin/gh/etaf/152/orig 2025-08-14T21:22:30.1614404Z * [new branch] gh/etaf/153/base -> origin/gh/etaf/153/base 2025-08-14T21:22:30.1614964Z * [new branch] gh/etaf/153/head -> origin/gh/etaf/153/head 2025-08-14T21:22:30.1615358Z * [new branch] gh/etaf/153/orig -> origin/gh/etaf/153/orig 2025-08-14T21:22:30.1615731Z * [new branch] gh/etaf/154/base -> origin/gh/etaf/154/base 2025-08-14T21:22:30.1616102Z * [new branch] gh/etaf/154/head -> origin/gh/etaf/154/head 2025-08-14T21:22:30.1616464Z * [new branch] gh/etaf/154/orig -> origin/gh/etaf/154/orig 2025-08-14T21:22:30.1617654Z * [new branch] gh/etaf/155/base -> origin/gh/etaf/155/base 2025-08-14T21:22:30.1618547Z * [new branch] gh/etaf/155/head -> origin/gh/etaf/155/head 2025-08-14T21:22:30.1619460Z * [new branch] gh/etaf/155/orig -> origin/gh/etaf/155/orig 2025-08-14T21:22:30.1621119Z * [new branch] gh/ezyang/2374/base -> origin/gh/ezyang/2374/base 2025-08-14T21:22:30.1622036Z * [new branch] gh/ezyang/2374/head -> origin/gh/ezyang/2374/head 2025-08-14T21:22:30.1622974Z * [new branch] gh/ezyang/2374/orig -> origin/gh/ezyang/2374/orig 2025-08-14T21:22:30.1624523Z * [new branch] gh/ezyang/2973/base -> origin/gh/ezyang/2973/base 2025-08-14T21:22:30.1629793Z * [new branch] gh/ezyang/2973/head -> origin/gh/ezyang/2973/head 2025-08-14T21:22:30.1630584Z * [new branch] gh/ezyang/2973/orig -> origin/gh/ezyang/2973/orig 2025-08-14T21:22:30.1631870Z * [new branch] gh/ezyang/2974/base -> origin/gh/ezyang/2974/base 2025-08-14T21:22:30.1632727Z * [new branch] gh/ezyang/2974/head -> origin/gh/ezyang/2974/head 2025-08-14T21:22:30.1633618Z * [new branch] gh/ezyang/2974/orig -> origin/gh/ezyang/2974/orig 2025-08-14T21:22:30.1634947Z * [new branch] gh/ezyang/3068/base -> origin/gh/ezyang/3068/base 2025-08-14T21:22:30.1635788Z * [new branch] gh/ezyang/3068/head -> origin/gh/ezyang/3068/head 2025-08-14T21:22:30.1636796Z * [new branch] gh/ezyang/3068/orig -> origin/gh/ezyang/3068/orig 2025-08-14T21:22:30.1638078Z * [new branch] gh/ezyang/3071/base -> origin/gh/ezyang/3071/base 2025-08-14T21:22:30.1645337Z * [new branch] gh/ezyang/3071/head -> origin/gh/ezyang/3071/head 2025-08-14T21:22:30.1645847Z * [new branch] gh/ezyang/3071/orig -> origin/gh/ezyang/3071/orig 2025-08-14T21:22:30.1646344Z * [new branch] gh/ezyang/3074/base -> origin/gh/ezyang/3074/base 2025-08-14T21:22:30.1646838Z * [new branch] gh/ezyang/3074/head -> origin/gh/ezyang/3074/head 2025-08-14T21:22:30.1647338Z * [new branch] gh/ezyang/3074/orig -> origin/gh/ezyang/3074/orig 2025-08-14T21:22:30.1647845Z * [new branch] gh/ezyang/3088/base -> 
origin/gh/ezyang/3088/base 2025-08-14T21:22:30.1648353Z * [new branch] gh/ezyang/3088/head -> origin/gh/ezyang/3088/head 2025-08-14T21:22:30.1649134Z * [new branch] gh/ezyang/3088/orig -> origin/gh/ezyang/3088/orig 2025-08-14T21:22:30.1649519Z * [new branch] gh/ezyang/3092/base -> origin/gh/ezyang/3092/base 2025-08-14T21:22:30.1649906Z * [new branch] gh/ezyang/3092/head -> origin/gh/ezyang/3092/head 2025-08-14T21:22:30.1650291Z * [new branch] gh/ezyang/3092/orig -> origin/gh/ezyang/3092/orig 2025-08-14T21:22:30.1651364Z * [new branch] gh/ezyang/3097/base -> origin/gh/ezyang/3097/base 2025-08-14T21:22:30.1652229Z * [new branch] gh/ezyang/3097/head -> origin/gh/ezyang/3097/head 2025-08-14T21:22:30.1657371Z * [new branch] gh/ezyang/3097/orig -> origin/gh/ezyang/3097/orig 2025-08-14T21:22:30.1663076Z * [new branch] gh/ezyang/3098/base -> origin/gh/ezyang/3098/base 2025-08-14T21:22:30.1664218Z * [new branch] gh/ezyang/3098/head -> origin/gh/ezyang/3098/head 2025-08-14T21:22:30.1664998Z * [new branch] gh/ezyang/3098/orig -> origin/gh/ezyang/3098/orig 2025-08-14T21:22:30.1666267Z * [new branch] gh/ezyang/3099/base -> origin/gh/ezyang/3099/base 2025-08-14T21:22:30.1667151Z * [new branch] gh/ezyang/3099/head -> origin/gh/ezyang/3099/head 2025-08-14T21:22:30.1672219Z * [new branch] gh/ezyang/3099/orig -> origin/gh/ezyang/3099/orig 2025-08-14T21:22:30.1672605Z * [new branch] gh/ezyang/3100/base -> origin/gh/ezyang/3100/base 2025-08-14T21:22:30.1672996Z * [new branch] gh/ezyang/3100/head -> origin/gh/ezyang/3100/head 2025-08-14T21:22:30.1673623Z * [new branch] gh/ezyang/3100/orig -> origin/gh/ezyang/3100/orig 2025-08-14T21:22:30.1676515Z * [new branch] gh/ezyang/3101/base -> origin/gh/ezyang/3101/base 2025-08-14T21:22:30.1677033Z * [new branch] gh/ezyang/3101/head -> origin/gh/ezyang/3101/head 2025-08-14T21:22:30.1677422Z * [new branch] gh/ezyang/3101/orig -> origin/gh/ezyang/3101/orig 2025-08-14T21:22:30.1678075Z * [new branch] gh/ezyang/3102/base -> origin/gh/ezyang/3102/base 2025-08-14T21:22:30.1678989Z * [new branch] gh/ezyang/3102/head -> origin/gh/ezyang/3102/head 2025-08-14T21:22:30.1679906Z * [new branch] gh/ezyang/3102/orig -> origin/gh/ezyang/3102/orig 2025-08-14T21:22:30.1681226Z * [new branch] gh/ezyang/3103/base -> origin/gh/ezyang/3103/base 2025-08-14T21:22:30.1682619Z * [new branch] gh/ezyang/3103/head -> origin/gh/ezyang/3103/head 2025-08-14T21:22:30.1683506Z * [new branch] gh/ezyang/3103/orig -> origin/gh/ezyang/3103/orig 2025-08-14T21:22:30.1686570Z * [new branch] gh/ezyang/3104/base -> origin/gh/ezyang/3104/base 2025-08-14T21:22:30.1687170Z * [new branch] gh/ezyang/3104/head -> origin/gh/ezyang/3104/head 2025-08-14T21:22:30.1687570Z * [new branch] gh/ezyang/3104/orig -> origin/gh/ezyang/3104/orig 2025-08-14T21:22:30.1688450Z * [new branch] gh/ezyang/3105/base -> origin/gh/ezyang/3105/base 2025-08-14T21:22:30.1689393Z * [new branch] gh/ezyang/3105/head -> origin/gh/ezyang/3105/head 2025-08-14T21:22:30.1690552Z * [new branch] gh/ezyang/3105/orig -> origin/gh/ezyang/3105/orig 2025-08-14T21:22:30.1692945Z * [new branch] gh/ezyang/3106/base -> origin/gh/ezyang/3106/base 2025-08-14T21:22:30.1693815Z * [new branch] gh/ezyang/3106/head -> origin/gh/ezyang/3106/head 2025-08-14T21:22:30.1694877Z * [new branch] gh/ezyang/3106/orig -> origin/gh/ezyang/3106/orig 2025-08-14T21:22:30.1696183Z * [new branch] gh/ezyang/3107/base -> origin/gh/ezyang/3107/base 2025-08-14T21:22:30.1705320Z * [new branch] gh/ezyang/3107/head -> origin/gh/ezyang/3107/head 2025-08-14T21:22:30.1705827Z * [new branch] 
gh/ezyang/3107/orig -> origin/gh/ezyang/3107/orig 2025-08-14T21:22:30.1706337Z * [new branch] gh/ezyang/3108/base -> origin/gh/ezyang/3108/base 2025-08-14T21:22:30.1706861Z * [new branch] gh/ezyang/3108/head -> origin/gh/ezyang/3108/head 2025-08-14T21:22:30.1707367Z * [new branch] gh/ezyang/3108/orig -> origin/gh/ezyang/3108/orig 2025-08-14T21:22:30.1707760Z * [new branch] gh/ezyang/3109/base -> origin/gh/ezyang/3109/base 2025-08-14T21:22:30.1708148Z * [new branch] gh/ezyang/3109/head -> origin/gh/ezyang/3109/head 2025-08-14T21:22:30.1709053Z * [new branch] gh/ezyang/3109/orig -> origin/gh/ezyang/3109/orig 2025-08-14T21:22:30.1710490Z * [new branch] gh/ezyang/3110/base -> origin/gh/ezyang/3110/base 2025-08-14T21:22:30.1719631Z * [new branch] gh/ezyang/3110/head -> origin/gh/ezyang/3110/head 2025-08-14T21:22:30.1720151Z * [new branch] gh/ezyang/3110/orig -> origin/gh/ezyang/3110/orig 2025-08-14T21:22:30.1720667Z * [new branch] gh/ezyang/3111/base -> origin/gh/ezyang/3111/base 2025-08-14T21:22:30.1721248Z * [new branch] gh/ezyang/3111/head -> origin/gh/ezyang/3111/head 2025-08-14T21:22:30.1721751Z * [new branch] gh/ezyang/3111/orig -> origin/gh/ezyang/3111/orig 2025-08-14T21:22:30.1722258Z * [new branch] gh/ezyang/3112/base -> origin/gh/ezyang/3112/base 2025-08-14T21:22:30.1722726Z * [new branch] gh/ezyang/3112/head -> origin/gh/ezyang/3112/head 2025-08-14T21:22:30.1723107Z * [new branch] gh/ezyang/3112/orig -> origin/gh/ezyang/3112/orig 2025-08-14T21:22:30.1723494Z * [new branch] gh/ezyang/3113/base -> origin/gh/ezyang/3113/base 2025-08-14T21:22:30.1723885Z * [new branch] gh/ezyang/3113/head -> origin/gh/ezyang/3113/head 2025-08-14T21:22:30.1724268Z * [new branch] gh/ezyang/3113/orig -> origin/gh/ezyang/3113/orig 2025-08-14T21:22:30.1724662Z * [new branch] gh/ezyang/3114/base -> origin/gh/ezyang/3114/base 2025-08-14T21:22:30.1725047Z * [new branch] gh/ezyang/3114/head -> origin/gh/ezyang/3114/head 2025-08-14T21:22:30.1727643Z * [new branch] gh/ezyang/3114/orig -> origin/gh/ezyang/3114/orig 2025-08-14T21:22:30.1731257Z * [new branch] gh/ezyang/3115/base -> origin/gh/ezyang/3115/base 2025-08-14T21:22:30.1732093Z * [new branch] gh/ezyang/3115/head -> origin/gh/ezyang/3115/head 2025-08-14T21:22:30.1733158Z * [new branch] gh/ezyang/3115/orig -> origin/gh/ezyang/3115/orig 2025-08-14T21:22:30.1734426Z * [new branch] gh/ezyang/3116/base -> origin/gh/ezyang/3116/base 2025-08-14T21:22:30.1735301Z * [new branch] gh/ezyang/3116/head -> origin/gh/ezyang/3116/head 2025-08-14T21:22:30.1736206Z * [new branch] gh/ezyang/3116/orig -> origin/gh/ezyang/3116/orig 2025-08-14T21:22:30.1737557Z * [new branch] gh/ezyang/3117/base -> origin/gh/ezyang/3117/base 2025-08-14T21:22:30.1738386Z * [new branch] gh/ezyang/3117/head -> origin/gh/ezyang/3117/head 2025-08-14T21:22:30.1739301Z * [new branch] gh/ezyang/3117/orig -> origin/gh/ezyang/3117/orig 2025-08-14T21:22:30.1744593Z * [new branch] gh/ezyang/3118/base -> origin/gh/ezyang/3118/base 2025-08-14T21:22:30.1744989Z * [new branch] gh/ezyang/3118/head -> origin/gh/ezyang/3118/head 2025-08-14T21:22:30.1745375Z * [new branch] gh/ezyang/3118/orig -> origin/gh/ezyang/3118/orig 2025-08-14T21:22:30.1745771Z * [new branch] gh/ezyang/3119/base -> origin/gh/ezyang/3119/base 2025-08-14T21:22:30.1746148Z * [new branch] gh/ezyang/3119/head -> origin/gh/ezyang/3119/head 2025-08-14T21:22:30.1746545Z * [new branch] gh/ezyang/3119/orig -> origin/gh/ezyang/3119/orig 2025-08-14T21:22:30.1747134Z * [new branch] gh/ezyang/3120/base -> origin/gh/ezyang/3120/base 
2025-08-14T21:22:30.1748064Z * [new branch] gh/ezyang/3120/head -> origin/gh/ezyang/3120/head 2025-08-14T21:22:30.1749354Z * [new branch] gh/ezyang/3120/orig -> origin/gh/ezyang/3120/orig 2025-08-14T21:22:30.1750755Z * [new branch] gh/ezyang/3121/base -> origin/gh/ezyang/3121/base 2025-08-14T21:22:30.1751614Z * [new branch] gh/ezyang/3121/head -> origin/gh/ezyang/3121/head 2025-08-14T21:22:30.1752659Z * [new branch] gh/ezyang/3121/orig -> origin/gh/ezyang/3121/orig 2025-08-14T21:22:30.1753962Z * [new branch] gh/ezyang/3122/base -> origin/gh/ezyang/3122/base 2025-08-14T21:22:30.1763157Z * [new branch] gh/ezyang/3122/head -> origin/gh/ezyang/3122/head 2025-08-14T21:22:30.1763683Z * [new branch] gh/ezyang/3122/orig -> origin/gh/ezyang/3122/orig 2025-08-14T21:22:30.1764190Z * [new branch] gh/ezyang/3123/base -> origin/gh/ezyang/3123/base 2025-08-14T21:22:30.1764687Z * [new branch] gh/ezyang/3123/head -> origin/gh/ezyang/3123/head 2025-08-14T21:22:30.1765088Z * [new branch] gh/ezyang/3123/orig -> origin/gh/ezyang/3123/orig 2025-08-14T21:22:30.1765477Z * [new branch] gh/ezyang/3124/base -> origin/gh/ezyang/3124/base 2025-08-14T21:22:30.1765872Z * [new branch] gh/ezyang/3124/head -> origin/gh/ezyang/3124/head 2025-08-14T21:22:30.1766446Z * [new branch] gh/ezyang/3124/orig -> origin/gh/ezyang/3124/orig 2025-08-14T21:22:30.1767702Z * [new branch] gh/ezyang/3125/base -> origin/gh/ezyang/3125/base 2025-08-14T21:22:30.1768577Z * [new branch] gh/ezyang/3125/head -> origin/gh/ezyang/3125/head 2025-08-14T21:22:30.1773551Z * [new branch] gh/ezyang/3125/orig -> origin/gh/ezyang/3125/orig 2025-08-14T21:22:30.1773950Z * [new branch] gh/ezyang/3126/base -> origin/gh/ezyang/3126/base 2025-08-14T21:22:30.1774325Z * [new branch] gh/ezyang/3126/head -> origin/gh/ezyang/3126/head 2025-08-14T21:22:30.1774705Z * [new branch] gh/ezyang/3126/orig -> origin/gh/ezyang/3126/orig 2025-08-14T21:22:30.1775094Z * [new branch] gh/ezyang/3127/base -> origin/gh/ezyang/3127/base 2025-08-14T21:22:30.1775472Z * [new branch] gh/ezyang/3127/head -> origin/gh/ezyang/3127/head 2025-08-14T21:22:30.1776257Z * [new branch] gh/ezyang/3127/orig -> origin/gh/ezyang/3127/orig 2025-08-14T21:22:30.1777650Z * [new branch] gh/ezyang/3128/base -> origin/gh/ezyang/3128/base 2025-08-14T21:22:30.1778607Z * [new branch] gh/ezyang/3128/head -> origin/gh/ezyang/3128/head 2025-08-14T21:22:30.1779480Z * [new branch] gh/ezyang/3128/orig -> origin/gh/ezyang/3128/orig 2025-08-14T21:22:30.1780795Z * [new branch] gh/ezyang/3129/base -> origin/gh/ezyang/3129/base 2025-08-14T21:22:30.1781700Z * [new branch] gh/ezyang/3129/head -> origin/gh/ezyang/3129/head 2025-08-14T21:22:30.1782595Z * [new branch] gh/ezyang/3129/orig -> origin/gh/ezyang/3129/orig 2025-08-14T21:22:30.1788837Z * [new branch] gh/ezyang/3130/base -> origin/gh/ezyang/3130/base 2025-08-14T21:22:30.1789711Z * [new branch] gh/ezyang/3130/head -> origin/gh/ezyang/3130/head 2025-08-14T21:22:30.1790634Z * [new branch] gh/ezyang/3130/orig -> origin/gh/ezyang/3130/orig 2025-08-14T21:22:30.1791968Z * [new branch] gh/ezyang/3131/base -> origin/gh/ezyang/3131/base 2025-08-14T21:22:30.1792848Z * [new branch] gh/ezyang/3131/head -> origin/gh/ezyang/3131/head 2025-08-14T21:22:30.1793785Z * [new branch] gh/ezyang/3131/orig -> origin/gh/ezyang/3131/orig 2025-08-14T21:22:30.1795093Z * [new branch] gh/ezyang/3132/base -> origin/gh/ezyang/3132/base 2025-08-14T21:22:30.1795940Z * [new branch] gh/ezyang/3132/head -> origin/gh/ezyang/3132/head 2025-08-14T21:22:30.1796872Z * [new branch] gh/ezyang/3132/orig -> 
origin/gh/ezyang/3132/orig 2025-08-14T21:22:30.1798190Z * [new branch] gh/ezyang/3133/base -> origin/gh/ezyang/3133/base 2025-08-14T21:22:30.1802895Z * [new branch] gh/ezyang/3133/head -> origin/gh/ezyang/3133/head 2025-08-14T21:22:30.1803511Z * [new branch] gh/ezyang/3133/orig -> origin/gh/ezyang/3133/orig 2025-08-14T21:22:30.1803910Z * [new branch] gh/ezyang/3134/base -> origin/gh/ezyang/3134/base 2025-08-14T21:22:30.1804304Z * [new branch] gh/ezyang/3134/head -> origin/gh/ezyang/3134/head 2025-08-14T21:22:30.1804687Z * [new branch] gh/ezyang/3134/orig -> origin/gh/ezyang/3134/orig 2025-08-14T21:22:30.1805079Z * [new branch] gh/ezyang/3135/base -> origin/gh/ezyang/3135/base 2025-08-14T21:22:30.1805634Z * [new branch] gh/ezyang/3135/head -> origin/gh/ezyang/3135/head 2025-08-14T21:22:30.1806588Z * [new branch] gh/ezyang/3135/orig -> origin/gh/ezyang/3135/orig 2025-08-14T21:22:30.1807809Z * [new branch] gh/ezyang/3136/base -> origin/gh/ezyang/3136/base 2025-08-14T21:22:30.1808654Z * [new branch] gh/ezyang/3136/head -> origin/gh/ezyang/3136/head 2025-08-14T21:22:30.1809564Z * [new branch] gh/ezyang/3136/orig -> origin/gh/ezyang/3136/orig 2025-08-14T21:22:30.1811089Z * [new branch] gh/fadara01/1/base -> origin/gh/fadara01/1/base 2025-08-14T21:22:30.1811935Z * [new branch] gh/fadara01/1/head -> origin/gh/fadara01/1/head 2025-08-14T21:22:30.1813125Z * [new branch] gh/fadara01/1/orig -> origin/gh/fadara01/1/orig 2025-08-14T21:22:30.1823291Z * [new branch] gh/fduwjj/168/base -> origin/gh/fduwjj/168/base 2025-08-14T21:22:30.1824258Z * [new branch] gh/fduwjj/168/head -> origin/gh/fduwjj/168/head 2025-08-14T21:22:30.1825215Z * [new branch] gh/fduwjj/168/orig -> origin/gh/fduwjj/168/orig 2025-08-14T21:22:30.1826678Z * [new branch] gh/fduwjj/169/base -> origin/gh/fduwjj/169/base 2025-08-14T21:22:30.1835803Z * [new branch] gh/fduwjj/169/head -> origin/gh/fduwjj/169/head 2025-08-14T21:22:30.1836381Z * [new branch] gh/fduwjj/169/orig -> origin/gh/fduwjj/169/orig 2025-08-14T21:22:30.1836875Z * [new branch] gh/fduwjj/170/base -> origin/gh/fduwjj/170/base 2025-08-14T21:22:30.1837374Z * [new branch] gh/fduwjj/170/head -> origin/gh/fduwjj/170/head 2025-08-14T21:22:30.1837867Z * [new branch] gh/fduwjj/170/orig -> origin/gh/fduwjj/170/orig 2025-08-14T21:22:30.1838251Z * [new branch] gh/fduwjj/171/base -> origin/gh/fduwjj/171/base 2025-08-14T21:22:30.1838634Z * [new branch] gh/fduwjj/171/head -> origin/gh/fduwjj/171/head 2025-08-14T21:22:30.1839010Z * [new branch] gh/fduwjj/171/orig -> origin/gh/fduwjj/171/orig 2025-08-14T21:22:30.1839383Z * [new branch] gh/fduwjj/172/base -> origin/gh/fduwjj/172/base 2025-08-14T21:22:30.1839962Z * [new branch] gh/fduwjj/172/head -> origin/gh/fduwjj/172/head 2025-08-14T21:22:30.1840853Z * [new branch] gh/fduwjj/172/orig -> origin/gh/fduwjj/172/orig 2025-08-14T21:22:30.1845953Z * [new branch] gh/fduwjj/173/base -> origin/gh/fduwjj/173/base 2025-08-14T21:22:30.1846341Z * [new branch] gh/fduwjj/173/head -> origin/gh/fduwjj/173/head 2025-08-14T21:22:30.1846728Z * [new branch] gh/fduwjj/173/orig -> origin/gh/fduwjj/173/orig 2025-08-14T21:22:30.1847110Z * [new branch] gh/fduwjj/174/base -> origin/gh/fduwjj/174/base 2025-08-14T21:22:30.1847493Z * [new branch] gh/fduwjj/174/head -> origin/gh/fduwjj/174/head 2025-08-14T21:22:30.1847866Z * [new branch] gh/fduwjj/174/orig -> origin/gh/fduwjj/174/orig 2025-08-14T21:22:30.1849223Z * [new branch] gh/fduwjj/175/base -> origin/gh/fduwjj/175/base 2025-08-14T21:22:30.1850632Z * [new branch] gh/fduwjj/175/head -> origin/gh/fduwjj/175/head 
2025-08-14T21:22:30.1851424Z * [new branch] gh/fduwjj/175/orig -> origin/gh/fduwjj/175/orig 2025-08-14T21:22:30.1852744Z * [new branch] gh/fduwjj/176/base -> origin/gh/fduwjj/176/base 2025-08-14T21:22:30.1853648Z * [new branch] gh/fduwjj/176/head -> origin/gh/fduwjj/176/head 2025-08-14T21:22:30.1854784Z * [new branch] gh/fduwjj/176/orig -> origin/gh/fduwjj/176/orig 2025-08-14T21:22:30.1855942Z * [new branch] gh/fduwjj/177/base -> origin/gh/fduwjj/177/base 2025-08-14T21:22:30.1861208Z * [new branch] gh/fduwjj/177/head -> origin/gh/fduwjj/177/head 2025-08-14T21:22:30.1862072Z * [new branch] gh/fduwjj/177/orig -> origin/gh/fduwjj/177/orig 2025-08-14T21:22:30.1863463Z * [new branch] gh/fduwjj/178/base -> origin/gh/fduwjj/178/base 2025-08-14T21:22:30.1864500Z * [new branch] gh/fduwjj/178/head -> origin/gh/fduwjj/178/head 2025-08-14T21:22:30.1865337Z * [new branch] gh/fduwjj/178/orig -> origin/gh/fduwjj/178/orig 2025-08-14T21:22:30.1866609Z * [new branch] gh/fduwjj/179/base -> origin/gh/fduwjj/179/base 2025-08-14T21:22:30.1867555Z * [new branch] gh/fduwjj/179/head -> origin/gh/fduwjj/179/head 2025-08-14T21:22:30.1868940Z * [new branch] gh/fduwjj/179/orig -> origin/gh/fduwjj/179/orig 2025-08-14T21:22:30.1870334Z * [new branch] gh/fduwjj/180/base -> origin/gh/fduwjj/180/base 2025-08-14T21:22:30.1879058Z * [new branch] gh/fduwjj/180/head -> origin/gh/fduwjj/180/head 2025-08-14T21:22:30.1879555Z * [new branch] gh/fduwjj/180/orig -> origin/gh/fduwjj/180/orig 2025-08-14T21:22:30.1880046Z * [new branch] gh/fduwjj/181/base -> origin/gh/fduwjj/181/base 2025-08-14T21:22:30.1880558Z * [new branch] gh/fduwjj/181/head -> origin/gh/fduwjj/181/head 2025-08-14T21:22:30.1881222Z * [new branch] gh/fduwjj/181/orig -> origin/gh/fduwjj/181/orig 2025-08-14T21:22:30.1881637Z * [new branch] gh/fegin/306/base -> origin/gh/fegin/306/base 2025-08-14T21:22:30.1882025Z * [new branch] gh/fegin/306/head -> origin/gh/fegin/306/head 2025-08-14T21:22:30.1882416Z * [new branch] gh/fegin/306/orig -> origin/gh/fegin/306/orig 2025-08-14T21:22:30.1882789Z * [new branch] gh/fegin/307/base -> origin/gh/fegin/307/base 2025-08-14T21:22:30.1883166Z * [new branch] gh/fegin/307/head -> origin/gh/fegin/307/head 2025-08-14T21:22:30.1883549Z * [new branch] gh/fegin/307/orig -> origin/gh/fegin/307/orig 2025-08-14T21:22:30.1884090Z * [new branch] gh/fffrog/114/base -> origin/gh/fffrog/114/base 2025-08-14T21:22:30.1884970Z * [new branch] gh/fffrog/114/head -> origin/gh/fffrog/114/head 2025-08-14T21:22:30.1921587Z * [new branch] gh/fffrog/114/orig -> origin/gh/fffrog/114/orig 2025-08-14T21:22:30.1922284Z * [new branch] gh/fffrog/117/base -> origin/gh/fffrog/117/base 2025-08-14T21:22:30.1922931Z * [new branch] gh/fffrog/117/head -> origin/gh/fffrog/117/head 2025-08-14T21:22:30.1923386Z * [new branch] gh/fffrog/117/orig -> origin/gh/fffrog/117/orig 2025-08-14T21:22:30.1923774Z * [new branch] gh/fffrog/119/base -> origin/gh/fffrog/119/base 2025-08-14T21:22:30.1924148Z * [new branch] gh/fffrog/119/head -> origin/gh/fffrog/119/head 2025-08-14T21:22:30.1924532Z * [new branch] gh/fffrog/119/orig -> origin/gh/fffrog/119/orig 2025-08-14T21:22:30.1924911Z * [new branch] gh/fffrog/120/base -> origin/gh/fffrog/120/base 2025-08-14T21:22:30.1925418Z * [new branch] gh/fffrog/120/head -> origin/gh/fffrog/120/head 2025-08-14T21:22:30.1925808Z * [new branch] gh/fffrog/120/orig -> origin/gh/fffrog/120/orig 2025-08-14T21:22:30.1926191Z * [new branch] gh/fffrog/121/base -> origin/gh/fffrog/121/base 2025-08-14T21:22:30.1926580Z * [new branch] gh/fffrog/121/head -> 
origin/gh/fffrog/121/head 2025-08-14T21:22:30.1926976Z * [new branch] gh/fffrog/121/orig -> origin/gh/fffrog/121/orig 2025-08-14T21:22:30.1927350Z * [new branch] gh/fffrog/122/base -> origin/gh/fffrog/122/base 2025-08-14T21:22:30.1927732Z * [new branch] gh/fffrog/122/head -> origin/gh/fffrog/122/head 2025-08-14T21:22:30.1928111Z * [new branch] gh/fffrog/122/orig -> origin/gh/fffrog/122/orig 2025-08-14T21:22:30.1928599Z * [new branch] gh/fffrog/123/base -> origin/gh/fffrog/123/base 2025-08-14T21:22:30.1929049Z * [new branch] gh/fffrog/123/head -> origin/gh/fffrog/123/head 2025-08-14T21:22:30.1929442Z * [new branch] gh/fffrog/123/orig -> origin/gh/fffrog/123/orig 2025-08-14T21:22:30.1929821Z * [new branch] gh/fffrog/124/base -> origin/gh/fffrog/124/base 2025-08-14T21:22:30.1930192Z * [new branch] gh/fffrog/124/head -> origin/gh/fffrog/124/head 2025-08-14T21:22:30.1930585Z * [new branch] gh/fffrog/124/orig -> origin/gh/fffrog/124/orig 2025-08-14T21:22:30.1930965Z * [new branch] gh/fffrog/125/base -> origin/gh/fffrog/125/base 2025-08-14T21:22:30.1931345Z * [new branch] gh/fffrog/125/head -> origin/gh/fffrog/125/head 2025-08-14T21:22:30.1931721Z * [new branch] gh/fffrog/125/orig -> origin/gh/fffrog/125/orig 2025-08-14T21:22:30.1932103Z * [new branch] gh/fffrog/126/base -> origin/gh/fffrog/126/base 2025-08-14T21:22:30.1932538Z * [new branch] gh/fffrog/126/head -> origin/gh/fffrog/126/head 2025-08-14T21:22:30.1932911Z * [new branch] gh/fffrog/126/orig -> origin/gh/fffrog/126/orig 2025-08-14T21:22:30.1933290Z * [new branch] gh/fffrog/127/base -> origin/gh/fffrog/127/base 2025-08-14T21:22:30.1933674Z * [new branch] gh/fffrog/127/head -> origin/gh/fffrog/127/head 2025-08-14T21:22:30.1934058Z * [new branch] gh/fffrog/127/orig -> origin/gh/fffrog/127/orig 2025-08-14T21:22:30.1934434Z * [new branch] gh/fffrog/128/base -> origin/gh/fffrog/128/base 2025-08-14T21:22:30.1934813Z * [new branch] gh/fffrog/128/head -> origin/gh/fffrog/128/head 2025-08-14T21:22:30.1935345Z * [new branch] gh/fffrog/128/orig -> origin/gh/fffrog/128/orig 2025-08-14T21:22:30.1935729Z * [new branch] gh/fffrog/129/base -> origin/gh/fffrog/129/base 2025-08-14T21:22:30.1936107Z * [new branch] gh/fffrog/129/head -> origin/gh/fffrog/129/head 2025-08-14T21:22:30.1936488Z * [new branch] gh/fffrog/129/orig -> origin/gh/fffrog/129/orig 2025-08-14T21:22:30.1936870Z * [new branch] gh/fffrog/130/base -> origin/gh/fffrog/130/base 2025-08-14T21:22:30.1937247Z * [new branch] gh/fffrog/130/head -> origin/gh/fffrog/130/head 2025-08-14T21:22:30.1937615Z * [new branch] gh/fffrog/130/orig -> origin/gh/fffrog/130/orig 2025-08-14T21:22:30.1938055Z * [new branch] gh/fffrog/131/base -> origin/gh/fffrog/131/base 2025-08-14T21:22:30.1938792Z * [new branch] gh/fffrog/131/head -> origin/gh/fffrog/131/head 2025-08-14T21:22:30.1939676Z * [new branch] gh/fffrog/131/orig -> origin/gh/fffrog/131/orig 2025-08-14T21:22:30.1941117Z * [new branch] gh/fffrog/132/base -> origin/gh/fffrog/132/base 2025-08-14T21:22:30.1941924Z * [new branch] gh/fffrog/132/head -> origin/gh/fffrog/132/head 2025-08-14T21:22:30.1942890Z * [new branch] gh/fffrog/132/orig -> origin/gh/fffrog/132/orig 2025-08-14T21:22:30.1953739Z * [new branch] gh/fffrog/133/base -> origin/gh/fffrog/133/base 2025-08-14T21:22:30.1954464Z * [new branch] gh/fffrog/133/head -> origin/gh/fffrog/133/head 2025-08-14T21:22:30.1955401Z * [new branch] gh/fffrog/133/orig -> origin/gh/fffrog/133/orig 2025-08-14T21:22:30.1956835Z * [new branch] gh/fffrog/134/base -> origin/gh/fffrog/134/base 2025-08-14T21:22:30.1957759Z * 
[new branch] gh/fffrog/134/head -> origin/gh/fffrog/134/head 2025-08-14T21:22:30.1962153Z * [new branch] gh/fffrog/134/orig -> origin/gh/fffrog/134/orig 2025-08-14T21:22:30.1963993Z * [new branch] gh/fffrog/135/base -> origin/gh/fffrog/135/base 2025-08-14T21:22:30.1964381Z * [new branch] gh/fffrog/135/head -> origin/gh/fffrog/135/head 2025-08-14T21:22:30.1964769Z * [new branch] gh/fffrog/135/orig -> origin/gh/fffrog/135/orig 2025-08-14T21:22:30.1965149Z * [new branch] gh/fffrog/136/base -> origin/gh/fffrog/136/base 2025-08-14T21:22:30.1965537Z * [new branch] gh/fffrog/136/head -> origin/gh/fffrog/136/head 2025-08-14T21:22:30.1965913Z * [new branch] gh/fffrog/136/orig -> origin/gh/fffrog/136/orig 2025-08-14T21:22:30.1967039Z * [new branch] gh/fffrog/137/base -> origin/gh/fffrog/137/base 2025-08-14T21:22:30.1967880Z * [new branch] gh/fffrog/137/head -> origin/gh/fffrog/137/head 2025-08-14T21:22:30.1968778Z * [new branch] gh/fffrog/137/orig -> origin/gh/fffrog/137/orig 2025-08-14T21:22:30.1970103Z * [new branch] gh/fffrog/138/base -> origin/gh/fffrog/138/base 2025-08-14T21:22:30.1970931Z * [new branch] gh/fffrog/138/head -> origin/gh/fffrog/138/head 2025-08-14T21:22:30.1971838Z * [new branch] gh/fffrog/138/orig -> origin/gh/fffrog/138/orig 2025-08-14T21:22:30.1976586Z * [new branch] gh/gmagogsfm/1/base -> origin/gh/gmagogsfm/1/base 2025-08-14T21:22:30.1977083Z * [new branch] gh/gmagogsfm/1/head -> origin/gh/gmagogsfm/1/head 2025-08-14T21:22:30.1977519Z * [new branch] gh/gmagogsfm/1/orig -> origin/gh/gmagogsfm/1/orig 2025-08-14T21:22:30.1977945Z * [new branch] gh/gmagogsfm/2/base -> origin/gh/gmagogsfm/2/base 2025-08-14T21:22:30.1978371Z * [new branch] gh/gmagogsfm/2/head -> origin/gh/gmagogsfm/2/head 2025-08-14T21:22:30.1978992Z * [new branch] gh/gmagogsfm/2/orig -> origin/gh/gmagogsfm/2/orig 2025-08-14T21:22:30.1981241Z * [new branch] gh/gmagogsfm/3/base -> origin/gh/gmagogsfm/3/base 2025-08-14T21:22:30.1981702Z * [new branch] gh/gmagogsfm/3/head -> origin/gh/gmagogsfm/3/head 2025-08-14T21:22:30.1982181Z * [new branch] gh/gmagogsfm/3/orig -> origin/gh/gmagogsfm/3/orig 2025-08-14T21:22:30.1983291Z * [new branch] gh/gmagogsfm/4/base -> origin/gh/gmagogsfm/4/base 2025-08-14T21:22:30.1984139Z * [new branch] gh/gmagogsfm/4/head -> origin/gh/gmagogsfm/4/head 2025-08-14T21:22:30.1985047Z * [new branch] gh/gmagogsfm/4/orig -> origin/gh/gmagogsfm/4/orig 2025-08-14T21:22:30.1986619Z * [new branch] gh/guangyey/130/base -> origin/gh/guangyey/130/base 2025-08-14T21:22:30.1995375Z * [new branch] gh/guangyey/130/head -> origin/gh/guangyey/130/head 2025-08-14T21:22:30.1995900Z * [new branch] gh/guangyey/130/orig -> origin/gh/guangyey/130/orig 2025-08-14T21:22:30.1996623Z * [new branch] gh/guangyey/133/base -> origin/gh/guangyey/133/base 2025-08-14T21:22:30.1997244Z * [new branch] gh/guangyey/133/head -> origin/gh/guangyey/133/head 2025-08-14T21:22:30.1998099Z * [new branch] gh/guangyey/133/orig -> origin/gh/guangyey/133/orig 2025-08-14T21:22:30.1999361Z * [new branch] gh/guangyey/134/base -> origin/gh/guangyey/134/base 2025-08-14T21:22:30.2000250Z * [new branch] gh/guangyey/134/head -> origin/gh/guangyey/134/head 2025-08-14T21:22:30.2001279Z * [new branch] gh/guangyey/134/orig -> origin/gh/guangyey/134/orig 2025-08-14T21:22:30.2005658Z * [new branch] gh/guangyey/135/base -> origin/gh/guangyey/135/base 2025-08-14T21:22:30.2006226Z * [new branch] gh/guangyey/135/head -> origin/gh/guangyey/135/head 2025-08-14T21:22:30.2006627Z * [new branch] gh/guangyey/135/orig -> origin/gh/guangyey/135/orig 
2025-08-14T21:22:30.2007049Z * [new branch] gh/guangyey/139/base -> origin/gh/guangyey/139/base 2025-08-14T21:22:30.2007472Z * [new branch] gh/guangyey/139/head -> origin/gh/guangyey/139/head 2025-08-14T21:22:30.2007899Z * [new branch] gh/guangyey/139/orig -> origin/gh/guangyey/139/orig 2025-08-14T21:22:30.2009758Z * [new branch] gh/guangyey/140/base -> origin/gh/guangyey/140/base 2025-08-14T21:22:30.2010298Z * [new branch] gh/guangyey/140/head -> origin/gh/guangyey/140/head 2025-08-14T21:22:30.2010810Z * [new branch] gh/guangyey/140/orig -> origin/gh/guangyey/140/orig 2025-08-14T21:22:30.2012080Z * [new branch] gh/guangyey/142/base -> origin/gh/guangyey/142/base 2025-08-14T21:22:30.2012934Z * [new branch] gh/guangyey/142/head -> origin/gh/guangyey/142/head 2025-08-14T21:22:30.2013835Z * [new branch] gh/guangyey/142/orig -> origin/gh/guangyey/142/orig 2025-08-14T21:22:30.2015085Z * [new branch] gh/guangyey/145/base -> origin/gh/guangyey/145/base 2025-08-14T21:22:30.2020260Z * [new branch] gh/guangyey/145/head -> origin/gh/guangyey/145/head 2025-08-14T21:22:30.2021312Z * [new branch] gh/guangyey/145/orig -> origin/gh/guangyey/145/orig 2025-08-14T21:22:30.2022568Z * [new branch] gh/guangyey/153/base -> origin/gh/guangyey/153/base 2025-08-14T21:22:30.2023487Z * [new branch] gh/guangyey/153/head -> origin/gh/guangyey/153/head 2025-08-14T21:22:30.2024383Z * [new branch] gh/guangyey/153/orig -> origin/gh/guangyey/153/orig 2025-08-14T21:22:30.2025690Z * [new branch] gh/guangyey/158/base -> origin/gh/guangyey/158/base 2025-08-14T21:22:30.2026539Z * [new branch] gh/guangyey/158/head -> origin/gh/guangyey/158/head 2025-08-14T21:22:30.2027459Z * [new branch] gh/guangyey/158/orig -> origin/gh/guangyey/158/orig 2025-08-14T21:22:30.2028718Z * [new branch] gh/guangyey/159/base -> origin/gh/guangyey/159/base 2025-08-14T21:22:30.2029583Z * [new branch] gh/guangyey/159/head -> origin/gh/guangyey/159/head 2025-08-14T21:22:30.2034542Z * [new branch] gh/guangyey/159/orig -> origin/gh/guangyey/159/orig 2025-08-14T21:22:30.2034940Z * [new branch] gh/guangyey/163/base -> origin/gh/guangyey/163/base 2025-08-14T21:22:30.2035330Z * [new branch] gh/guangyey/163/head -> origin/gh/guangyey/163/head 2025-08-14T21:22:30.2035793Z * [new branch] gh/guangyey/163/orig -> origin/gh/guangyey/163/orig 2025-08-14T21:22:30.2036229Z * [new branch] gh/guangyey/165/base -> origin/gh/guangyey/165/base 2025-08-14T21:22:30.2036632Z * [new branch] gh/guangyey/165/head -> origin/gh/guangyey/165/head 2025-08-14T21:22:30.2037586Z * [new branch] gh/guangyey/165/orig -> origin/gh/guangyey/165/orig 2025-08-14T21:22:30.2039266Z * [new branch] gh/guangyey/168/base -> origin/gh/guangyey/168/base 2025-08-14T21:22:30.2040172Z * [new branch] gh/guangyey/168/head -> origin/gh/guangyey/168/head 2025-08-14T21:22:30.2041161Z * [new branch] gh/guangyey/168/orig -> origin/gh/guangyey/168/orig 2025-08-14T21:22:30.2042518Z * [new branch] gh/guangyey/169/base -> origin/gh/guangyey/169/base 2025-08-14T21:22:30.2043389Z * [new branch] gh/guangyey/169/head -> origin/gh/guangyey/169/head 2025-08-14T21:22:30.2044320Z * [new branch] gh/guangyey/169/orig -> origin/gh/guangyey/169/orig 2025-08-14T21:22:30.2053676Z * [new branch] gh/guangyey/170/base -> origin/gh/guangyey/170/base 2025-08-14T21:22:30.2054086Z * [new branch] gh/guangyey/170/head -> origin/gh/guangyey/170/head 2025-08-14T21:22:30.2054489Z * [new branch] gh/guangyey/170/orig -> origin/gh/guangyey/170/orig 2025-08-14T21:22:30.2054885Z * [new branch] gh/guangyey/171/base -> origin/gh/guangyey/171/base 
2025-08-14T21:22:30.2055270Z * [new branch] gh/guangyey/171/head -> origin/gh/guangyey/171/head 2025-08-14T21:22:30.2055662Z * [new branch] gh/guangyey/171/orig -> origin/gh/guangyey/171/orig 2025-08-14T21:22:30.2056686Z * [new branch] gh/guangyey/172/base -> origin/gh/guangyey/172/base 2025-08-14T21:22:30.2057525Z * [new branch] gh/guangyey/172/head -> origin/gh/guangyey/172/head 2025-08-14T21:22:30.2058425Z * [new branch] gh/guangyey/172/orig -> origin/gh/guangyey/172/orig 2025-08-14T21:22:30.2063530Z * [new branch] gh/guangyey/173/base -> origin/gh/guangyey/173/base 2025-08-14T21:22:30.2063976Z * [new branch] gh/guangyey/173/head -> origin/gh/guangyey/173/head 2025-08-14T21:22:30.2064507Z * [new branch] gh/guangyey/173/orig -> origin/gh/guangyey/173/orig 2025-08-14T21:22:30.2064922Z * [new branch] gh/guangyey/174/base -> origin/gh/guangyey/174/base 2025-08-14T21:22:30.2065348Z * [new branch] gh/guangyey/174/head -> origin/gh/guangyey/174/head 2025-08-14T21:22:30.2065769Z * [new branch] gh/guangyey/174/orig -> origin/gh/guangyey/174/orig 2025-08-14T21:22:30.2066286Z * [new branch] gh/guangyey/175/base -> origin/gh/guangyey/175/base 2025-08-14T21:22:30.2067175Z * [new branch] gh/guangyey/175/head -> origin/gh/guangyey/175/head 2025-08-14T21:22:30.2069136Z * [new branch] gh/guangyey/175/orig -> origin/gh/guangyey/175/orig 2025-08-14T21:22:30.2069529Z * [new branch] gh/guangyey/176/base -> origin/gh/guangyey/176/base 2025-08-14T21:22:30.2070806Z * [new branch] gh/guangyey/176/head -> origin/gh/guangyey/176/head 2025-08-14T21:22:30.2071697Z * [new branch] gh/guangyey/176/orig -> origin/gh/guangyey/176/orig 2025-08-14T21:22:30.2073217Z * [new branch] gh/guangyey/177/base -> origin/gh/guangyey/177/base 2025-08-14T21:22:30.2077884Z * [new branch] gh/guangyey/177/head -> origin/gh/guangyey/177/head 2025-08-14T21:22:30.2079674Z * [new branch] gh/guangyey/177/orig -> origin/gh/guangyey/177/orig 2025-08-14T21:22:30.2081003Z * [new branch] gh/guangyey/178/base -> origin/gh/guangyey/178/base 2025-08-14T21:22:30.2081999Z * [new branch] gh/guangyey/178/head -> origin/gh/guangyey/178/head 2025-08-14T21:22:30.2082969Z * [new branch] gh/guangyey/178/orig -> origin/gh/guangyey/178/orig 2025-08-14T21:22:30.2084228Z * [new branch] gh/guangyey/179/base -> origin/gh/guangyey/179/base 2025-08-14T21:22:30.2085150Z * [new branch] gh/guangyey/179/head -> origin/gh/guangyey/179/head 2025-08-14T21:22:30.2095004Z * [new branch] gh/guangyey/179/orig -> origin/gh/guangyey/179/orig 2025-08-14T21:22:30.2095516Z * [new branch] gh/guangyey/180/base -> origin/gh/guangyey/180/base 2025-08-14T21:22:30.2096031Z * [new branch] gh/guangyey/180/head -> origin/gh/guangyey/180/head 2025-08-14T21:22:30.2096439Z * [new branch] gh/guangyey/180/orig -> origin/gh/guangyey/180/orig 2025-08-14T21:22:30.2096826Z * [new branch] gh/guangyey/181/base -> origin/gh/guangyey/181/base 2025-08-14T21:22:30.2097211Z * [new branch] gh/guangyey/181/head -> origin/gh/guangyey/181/head 2025-08-14T21:22:30.2097602Z * [new branch] gh/guangyey/181/orig -> origin/gh/guangyey/181/orig 2025-08-14T21:22:30.2097993Z * [new branch] gh/guangyey/182/base -> origin/gh/guangyey/182/base 2025-08-14T21:22:30.2098383Z * [new branch] gh/guangyey/182/head -> origin/gh/guangyey/182/head 2025-08-14T21:22:30.2098778Z * [new branch] gh/guangyey/182/orig -> origin/gh/guangyey/182/orig 2025-08-14T21:22:30.2099164Z * [new branch] gh/guangyey/183/base -> origin/gh/guangyey/183/base 2025-08-14T21:22:30.2099554Z * [new branch] gh/guangyey/183/head -> origin/gh/guangyey/183/head 
2025-08-14T21:22:30.2099941Z * [new branch] gh/guangyey/183/orig -> origin/gh/guangyey/183/orig 2025-08-14T21:22:30.2100331Z * [new branch] gh/guangyey/184/base -> origin/gh/guangyey/184/base 2025-08-14T21:22:30.2101148Z * [new branch] gh/guangyey/184/head -> origin/gh/guangyey/184/head 2025-08-14T21:22:30.2102056Z * [new branch] gh/guangyey/184/orig -> origin/gh/guangyey/184/orig 2025-08-14T21:22:30.2111937Z * [new branch] gh/guangyey/185/base -> origin/gh/guangyey/185/base 2025-08-14T21:22:30.2112865Z * [new branch] gh/guangyey/185/head -> origin/gh/guangyey/185/head 2025-08-14T21:22:30.2113805Z * [new branch] gh/guangyey/185/orig -> origin/gh/guangyey/185/orig 2025-08-14T21:22:30.2115488Z * [new branch] gh/guangyey/79/base -> origin/gh/guangyey/79/base 2025-08-14T21:22:30.2116363Z * [new branch] gh/guangyey/79/head -> origin/gh/guangyey/79/head 2025-08-14T21:22:30.2121472Z * [new branch] gh/guangyey/79/orig -> origin/gh/guangyey/79/orig 2025-08-14T21:22:30.2121870Z * [new branch] gh/guangyey/89/base -> origin/gh/guangyey/89/base 2025-08-14T21:22:30.2122266Z * [new branch] gh/guangyey/89/head -> origin/gh/guangyey/89/head 2025-08-14T21:22:30.2122655Z * [new branch] gh/guangyey/89/orig -> origin/gh/guangyey/89/orig 2025-08-14T21:22:30.2123092Z * [new branch] gh/guilhermeleobas/107/base -> origin/gh/guilhermeleobas/107/base 2025-08-14T21:22:30.2123621Z * [new branch] gh/guilhermeleobas/107/head -> origin/gh/guilhermeleobas/107/head 2025-08-14T21:22:30.2124345Z * [new branch] gh/guilhermeleobas/107/orig -> origin/gh/guilhermeleobas/107/orig 2025-08-14T21:22:30.2125563Z * [new branch] gh/guilhermeleobas/108/base -> origin/gh/guilhermeleobas/108/base 2025-08-14T21:22:30.2126905Z * [new branch] gh/guilhermeleobas/108/head -> origin/gh/guilhermeleobas/108/head 2025-08-14T21:22:30.2127773Z * [new branch] gh/guilhermeleobas/108/orig -> origin/gh/guilhermeleobas/108/orig 2025-08-14T21:22:30.2129063Z * [new branch] gh/guilhermeleobas/124/base -> origin/gh/guilhermeleobas/124/base 2025-08-14T21:22:30.2130006Z * [new branch] gh/guilhermeleobas/124/head -> origin/gh/guilhermeleobas/124/head 2025-08-14T21:22:30.2131035Z * [new branch] gh/guilhermeleobas/124/orig -> origin/gh/guilhermeleobas/124/orig 2025-08-14T21:22:30.2135973Z * [new branch] gh/guilhermeleobas/147/base -> origin/gh/guilhermeleobas/147/base 2025-08-14T21:22:30.2136178Z * [new branch] gh/guilhermeleobas/147/head -> origin/gh/guilhermeleobas/147/head 2025-08-14T21:22:30.2136382Z * [new branch] gh/guilhermeleobas/147/orig -> origin/gh/guilhermeleobas/147/orig 2025-08-14T21:22:30.2136581Z * [new branch] gh/guilhermeleobas/150/base -> origin/gh/guilhermeleobas/150/base 2025-08-14T21:22:30.2136794Z * [new branch] gh/guilhermeleobas/150/head -> origin/gh/guilhermeleobas/150/head 2025-08-14T21:22:30.2140830Z * [new branch] gh/guilhermeleobas/150/orig -> origin/gh/guilhermeleobas/150/orig 2025-08-14T21:22:30.2141056Z * [new branch] gh/guilhermeleobas/163/base -> origin/gh/guilhermeleobas/163/base 2025-08-14T21:22:30.2141249Z * [new branch] gh/guilhermeleobas/163/head -> origin/gh/guilhermeleobas/163/head 2025-08-14T21:22:30.2141456Z * [new branch] gh/guilhermeleobas/163/orig -> origin/gh/guilhermeleobas/163/orig 2025-08-14T21:22:30.2141899Z * [new branch] gh/guilhermeleobas/164/base -> origin/gh/guilhermeleobas/164/base 2025-08-14T21:22:30.2142832Z * [new branch] gh/guilhermeleobas/164/head -> origin/gh/guilhermeleobas/164/head 2025-08-14T21:22:30.2143753Z * [new branch] gh/guilhermeleobas/164/orig -> origin/gh/guilhermeleobas/164/orig 
2025-08-14T21:22:30.2144979Z * [new branch] gh/guilhermeleobas/165/base -> origin/gh/guilhermeleobas/165/base 2025-08-14T21:22:30.2145963Z * [new branch] gh/guilhermeleobas/165/head -> origin/gh/guilhermeleobas/165/head 2025-08-14T21:22:30.2155428Z * [new branch] gh/guilhermeleobas/165/orig -> origin/gh/guilhermeleobas/165/orig 2025-08-14T21:22:30.2155721Z * [new branch] gh/guilhermeleobas/166/base -> origin/gh/guilhermeleobas/166/base 2025-08-14T21:22:30.2155980Z * [new branch] gh/guilhermeleobas/166/head -> origin/gh/guilhermeleobas/166/head 2025-08-14T21:22:30.2156801Z * [new branch] gh/guilhermeleobas/166/orig -> origin/gh/guilhermeleobas/166/orig 2025-08-14T21:22:30.2158152Z * [new branch] gh/guilhermeleobas/167/base -> origin/gh/guilhermeleobas/167/base 2025-08-14T21:22:30.2159084Z * [new branch] gh/guilhermeleobas/167/head -> origin/gh/guilhermeleobas/167/head 2025-08-14T21:22:30.2160040Z * [new branch] gh/guilhermeleobas/167/orig -> origin/gh/guilhermeleobas/167/orig 2025-08-14T21:22:30.2162853Z * [new branch] gh/guilhermeleobas/168/base -> origin/gh/guilhermeleobas/168/base 2025-08-14T21:22:30.2163051Z * [new branch] gh/guilhermeleobas/168/head -> origin/gh/guilhermeleobas/168/head 2025-08-14T21:22:30.2163409Z * [new branch] gh/guilhermeleobas/168/orig -> origin/gh/guilhermeleobas/168/orig 2025-08-14T21:22:30.2165229Z * [new branch] gh/guilhermeleobas/169/base -> origin/gh/guilhermeleobas/169/base 2025-08-14T21:22:30.2165716Z * [new branch] gh/guilhermeleobas/169/head -> origin/gh/guilhermeleobas/169/head 2025-08-14T21:22:30.2166651Z * [new branch] gh/guilhermeleobas/169/orig -> origin/gh/guilhermeleobas/169/orig 2025-08-14T21:22:30.2167883Z * [new branch] gh/guilhermeleobas/170/base -> origin/gh/guilhermeleobas/170/base 2025-08-14T21:22:30.2168795Z * [new branch] gh/guilhermeleobas/170/head -> origin/gh/guilhermeleobas/170/head 2025-08-14T21:22:30.2169982Z * [new branch] gh/guilhermeleobas/170/orig -> origin/gh/guilhermeleobas/170/orig 2025-08-14T21:22:30.2171240Z * [new branch] gh/guilhermeleobas/171/base -> origin/gh/guilhermeleobas/171/base 2025-08-14T21:22:30.2172126Z * [new branch] gh/guilhermeleobas/171/head -> origin/gh/guilhermeleobas/171/head 2025-08-14T21:22:30.2173144Z * [new branch] gh/guilhermeleobas/171/orig -> origin/gh/guilhermeleobas/171/orig 2025-08-14T21:22:30.2174318Z * [new branch] gh/guilhermeleobas/173/base -> origin/gh/guilhermeleobas/173/base 2025-08-14T21:22:30.2175283Z * [new branch] gh/guilhermeleobas/173/head -> origin/gh/guilhermeleobas/173/head 2025-08-14T21:22:30.2180524Z * [new branch] gh/guilhermeleobas/173/orig -> origin/gh/guilhermeleobas/173/orig 2025-08-14T21:22:30.2181795Z * [new branch] gh/guilhermeleobas/181/base -> origin/gh/guilhermeleobas/181/base 2025-08-14T21:22:30.2183136Z * [new branch] gh/guilhermeleobas/181/head -> origin/gh/guilhermeleobas/181/head 2025-08-14T21:22:30.2183756Z * [new branch] gh/guilhermeleobas/181/orig -> origin/gh/guilhermeleobas/181/orig 2025-08-14T21:22:30.2185432Z * [new branch] gh/guilhermeleobas/182/base -> origin/gh/guilhermeleobas/182/base 2025-08-14T21:22:30.2186322Z * [new branch] gh/guilhermeleobas/182/head -> origin/gh/guilhermeleobas/182/head 2025-08-14T21:22:30.2187242Z * [new branch] gh/guilhermeleobas/182/orig -> origin/gh/guilhermeleobas/182/orig 2025-08-14T21:22:30.2188570Z * [new branch] gh/guilhermeleobas/183/base -> origin/gh/guilhermeleobas/183/base 2025-08-14T21:22:30.2189523Z * [new branch] gh/guilhermeleobas/183/head -> origin/gh/guilhermeleobas/183/head 2025-08-14T21:22:30.2194076Z * 
[new branch] gh/guilhermeleobas/183/orig -> origin/gh/guilhermeleobas/183/orig 2025-08-14T21:22:30.2194322Z * [new branch] gh/guilhermeleobas/184/base -> origin/gh/guilhermeleobas/184/base 2025-08-14T21:22:30.2194513Z * [new branch] gh/guilhermeleobas/184/head -> origin/gh/guilhermeleobas/184/head 2025-08-14T21:22:30.2194701Z * [new branch] gh/guilhermeleobas/184/orig -> origin/gh/guilhermeleobas/184/orig 2025-08-14T21:22:30.2195171Z * [new branch] gh/guilhermeleobas/185/base -> origin/gh/guilhermeleobas/185/base 2025-08-14T21:22:30.2196191Z * [new branch] gh/guilhermeleobas/185/head -> origin/gh/guilhermeleobas/185/head 2025-08-14T21:22:30.2197167Z * [new branch] gh/guilhermeleobas/185/orig -> origin/gh/guilhermeleobas/185/orig 2025-08-14T21:22:30.2198477Z * [new branch] gh/guilhermeleobas/188/base -> origin/gh/guilhermeleobas/188/base 2025-08-14T21:22:30.2199352Z * [new branch] gh/guilhermeleobas/188/head -> origin/gh/guilhermeleobas/188/head 2025-08-14T21:22:30.2200289Z * [new branch] gh/guilhermeleobas/188/orig -> origin/gh/guilhermeleobas/188/orig 2025-08-14T21:22:30.2201763Z * [new branch] gh/guilhermeleobas/189/base -> origin/gh/guilhermeleobas/189/base 2025-08-14T21:22:30.2202689Z * [new branch] gh/guilhermeleobas/189/head -> origin/gh/guilhermeleobas/189/head 2025-08-14T21:22:30.2203601Z * [new branch] gh/guilhermeleobas/189/orig -> origin/gh/guilhermeleobas/189/orig 2025-08-14T21:22:30.2212984Z * [new branch] gh/guilhermeleobas/190/base -> origin/gh/guilhermeleobas/190/base 2025-08-14T21:22:30.2213251Z * [new branch] gh/guilhermeleobas/190/head -> origin/gh/guilhermeleobas/190/head 2025-08-14T21:22:30.2213452Z * [new branch] gh/guilhermeleobas/190/orig -> origin/gh/guilhermeleobas/190/orig 2025-08-14T21:22:30.2213642Z * [new branch] gh/guilhermeleobas/192/base -> origin/gh/guilhermeleobas/192/base 2025-08-14T21:22:30.2213841Z * [new branch] gh/guilhermeleobas/192/head -> origin/gh/guilhermeleobas/192/head 2025-08-14T21:22:30.2214174Z * [new branch] gh/guilhermeleobas/192/orig -> origin/gh/guilhermeleobas/192/orig 2025-08-14T21:22:30.2215540Z * [new branch] gh/guilhermeleobas/193/base -> origin/gh/guilhermeleobas/193/base 2025-08-14T21:22:30.2216489Z * [new branch] gh/guilhermeleobas/193/head -> origin/gh/guilhermeleobas/193/head 2025-08-14T21:22:30.2217927Z * [new branch] gh/guilhermeleobas/193/orig -> origin/gh/guilhermeleobas/193/orig 2025-08-14T21:22:30.2223106Z * [new branch] gh/guilhermeleobas/194/base -> origin/gh/guilhermeleobas/194/base 2025-08-14T21:22:30.2223336Z * [new branch] gh/guilhermeleobas/194/head -> origin/gh/guilhermeleobas/194/head 2025-08-14T21:22:30.2223568Z * [new branch] gh/guilhermeleobas/194/orig -> origin/gh/guilhermeleobas/194/orig 2025-08-14T21:22:30.2223788Z * [new branch] gh/guilhermeleobas/203/base -> origin/gh/guilhermeleobas/203/base 2025-08-14T21:22:30.2223992Z * [new branch] gh/guilhermeleobas/203/head -> origin/gh/guilhermeleobas/203/head 2025-08-14T21:22:30.2224307Z * [new branch] gh/guilhermeleobas/203/orig -> origin/gh/guilhermeleobas/203/orig 2025-08-14T21:22:30.2227474Z * [new branch] gh/guilhermeleobas/204/base -> origin/gh/guilhermeleobas/204/base 2025-08-14T21:22:30.2227732Z * [new branch] gh/guilhermeleobas/204/head -> origin/gh/guilhermeleobas/204/head 2025-08-14T21:22:30.2228000Z * [new branch] gh/guilhermeleobas/204/orig -> origin/gh/guilhermeleobas/204/orig 2025-08-14T21:22:30.2229008Z * [new branch] gh/guilhermeleobas/205/base -> origin/gh/guilhermeleobas/205/base 2025-08-14T21:22:30.2229967Z * [new branch] 
gh/guilhermeleobas/205/head -> origin/gh/guilhermeleobas/205/head 2025-08-14T21:22:30.2230934Z * [new branch] gh/guilhermeleobas/205/orig -> origin/gh/guilhermeleobas/205/orig 2025-08-14T21:22:30.2232151Z * [new branch] gh/guilhermeleobas/206/base -> origin/gh/guilhermeleobas/206/base 2025-08-14T21:22:30.2233066Z * [new branch] gh/guilhermeleobas/206/head -> origin/gh/guilhermeleobas/206/head 2025-08-14T21:22:30.2238460Z * [new branch] gh/guilhermeleobas/206/orig -> origin/gh/guilhermeleobas/206/orig 2025-08-14T21:22:30.2239719Z * [new branch] gh/guilhermeleobas/207/base -> origin/gh/guilhermeleobas/207/base 2025-08-14T21:22:30.2240701Z * [new branch] gh/guilhermeleobas/207/head -> origin/gh/guilhermeleobas/207/head 2025-08-14T21:22:30.2241866Z * [new branch] gh/guilhermeleobas/207/orig -> origin/gh/guilhermeleobas/207/orig 2025-08-14T21:22:30.2243040Z * [new branch] gh/guilhermeleobas/208/base -> origin/gh/guilhermeleobas/208/base 2025-08-14T21:22:30.2244029Z * [new branch] gh/guilhermeleobas/208/head -> origin/gh/guilhermeleobas/208/head 2025-08-14T21:22:30.2244947Z * [new branch] gh/guilhermeleobas/208/orig -> origin/gh/guilhermeleobas/208/orig 2025-08-14T21:22:30.2246235Z * [new branch] gh/guilhermeleobas/209/base -> origin/gh/guilhermeleobas/209/base 2025-08-14T21:22:30.2247184Z * [new branch] gh/guilhermeleobas/209/head -> origin/gh/guilhermeleobas/209/head 2025-08-14T21:22:30.2252463Z * [new branch] gh/guilhermeleobas/209/orig -> origin/gh/guilhermeleobas/209/orig 2025-08-14T21:22:30.2252673Z * [new branch] gh/guilhermeleobas/210/base -> origin/gh/guilhermeleobas/210/base 2025-08-14T21:22:30.2252873Z * [new branch] gh/guilhermeleobas/210/head -> origin/gh/guilhermeleobas/210/head 2025-08-14T21:22:30.2253069Z * [new branch] gh/guilhermeleobas/210/orig -> origin/gh/guilhermeleobas/210/orig 2025-08-14T21:22:30.2253272Z * [new branch] gh/guilhermeleobas/211/base -> origin/gh/guilhermeleobas/211/base 2025-08-14T21:22:30.2253986Z * [new branch] gh/guilhermeleobas/211/head -> origin/gh/guilhermeleobas/211/head 2025-08-14T21:22:30.2254951Z * [new branch] gh/guilhermeleobas/211/orig -> origin/gh/guilhermeleobas/211/orig 2025-08-14T21:22:30.2256188Z * [new branch] gh/guilhermeleobas/212/base -> origin/gh/guilhermeleobas/212/base 2025-08-14T21:22:30.2257163Z * [new branch] gh/guilhermeleobas/212/head -> origin/gh/guilhermeleobas/212/head 2025-08-14T21:22:30.2258177Z * [new branch] gh/guilhermeleobas/212/orig -> origin/gh/guilhermeleobas/212/orig 2025-08-14T21:22:30.2259970Z * [new branch] gh/guilhermeleobas/213/base -> origin/gh/guilhermeleobas/213/base 2025-08-14T21:22:30.2260824Z * [new branch] gh/guilhermeleobas/213/head -> origin/gh/guilhermeleobas/213/head 2025-08-14T21:22:30.2261746Z * [new branch] gh/guilhermeleobas/213/orig -> origin/gh/guilhermeleobas/213/orig 2025-08-14T21:22:30.2271669Z * [new branch] gh/guilhermeleobas/214/base -> origin/gh/guilhermeleobas/214/base 2025-08-14T21:22:30.2272905Z * [new branch] gh/guilhermeleobas/214/head -> origin/gh/guilhermeleobas/214/head 2025-08-14T21:22:30.2273824Z * [new branch] gh/guilhermeleobas/214/orig -> origin/gh/guilhermeleobas/214/orig 2025-08-14T21:22:30.2275122Z * [new branch] gh/guilhermeleobas/215/base -> origin/gh/guilhermeleobas/215/base 2025-08-14T21:22:30.2276043Z * [new branch] gh/guilhermeleobas/215/head -> origin/gh/guilhermeleobas/215/head 2025-08-14T21:22:30.2285245Z * [new branch] gh/guilhermeleobas/215/orig -> origin/gh/guilhermeleobas/215/orig 2025-08-14T21:22:30.2285500Z * [new branch] gh/guilhermeleobas/216/base -> 
origin/gh/guilhermeleobas/216/base 2025-08-14T21:22:30.2285758Z * [new branch] gh/guilhermeleobas/216/head -> origin/gh/guilhermeleobas/216/head 2025-08-14T21:22:30.2286007Z * [new branch] gh/guilhermeleobas/216/orig -> origin/gh/guilhermeleobas/216/orig 2025-08-14T21:22:30.2286254Z * [new branch] gh/guilhermeleobas/217/base -> origin/gh/guilhermeleobas/217/base 2025-08-14T21:22:30.2286512Z * [new branch] gh/guilhermeleobas/217/head -> origin/gh/guilhermeleobas/217/head 2025-08-14T21:22:30.2286763Z * [new branch] gh/guilhermeleobas/217/orig -> origin/gh/guilhermeleobas/217/orig 2025-08-14T21:22:30.2287399Z * [new branch] gh/guilhermeleobas/218/base -> origin/gh/guilhermeleobas/218/base 2025-08-14T21:22:30.2288278Z * [new branch] gh/guilhermeleobas/218/head -> origin/gh/guilhermeleobas/218/head 2025-08-14T21:22:30.2289237Z * [new branch] gh/guilhermeleobas/218/orig -> origin/gh/guilhermeleobas/218/orig 2025-08-14T21:22:30.2290486Z * [new branch] gh/guilhermeleobas/219/base -> origin/gh/guilhermeleobas/219/base 2025-08-14T21:22:30.2295447Z * [new branch] gh/guilhermeleobas/219/head -> origin/gh/guilhermeleobas/219/head 2025-08-14T21:22:30.2295647Z * [new branch] gh/guilhermeleobas/219/orig -> origin/gh/guilhermeleobas/219/orig 2025-08-14T21:22:30.2295840Z * [new branch] gh/guilhermeleobas/220/base -> origin/gh/guilhermeleobas/220/base 2025-08-14T21:22:30.2296040Z * [new branch] gh/guilhermeleobas/220/head -> origin/gh/guilhermeleobas/220/head 2025-08-14T21:22:30.2296230Z * [new branch] gh/guilhermeleobas/220/orig -> origin/gh/guilhermeleobas/220/orig 2025-08-14T21:22:30.2297108Z * [new branch] gh/guilhermeleobas/221/base -> origin/gh/guilhermeleobas/221/base 2025-08-14T21:22:30.2298062Z * [new branch] gh/guilhermeleobas/221/head -> origin/gh/guilhermeleobas/221/head 2025-08-14T21:22:30.2299005Z * [new branch] gh/guilhermeleobas/221/orig -> origin/gh/guilhermeleobas/221/orig 2025-08-14T21:22:30.2300241Z * [new branch] gh/guilhermeleobas/222/base -> origin/gh/guilhermeleobas/222/base 2025-08-14T21:22:30.2301179Z * [new branch] gh/guilhermeleobas/222/head -> origin/gh/guilhermeleobas/222/head 2025-08-14T21:22:30.2302097Z * [new branch] gh/guilhermeleobas/222/orig -> origin/gh/guilhermeleobas/222/orig 2025-08-14T21:22:30.2303331Z * [new branch] gh/guilhermeleobas/223/base -> origin/gh/guilhermeleobas/223/base 2025-08-14T21:22:30.2304250Z * [new branch] gh/guilhermeleobas/223/head -> origin/gh/guilhermeleobas/223/head 2025-08-14T21:22:30.2305260Z * [new branch] gh/guilhermeleobas/223/orig -> origin/gh/guilhermeleobas/223/orig 2025-08-14T21:22:30.2311035Z * [new branch] gh/guilhermeleobas/224/base -> origin/gh/guilhermeleobas/224/base 2025-08-14T21:22:30.2314720Z * [new branch] gh/guilhermeleobas/224/head -> origin/gh/guilhermeleobas/224/head 2025-08-14T21:22:30.2314995Z * [new branch] gh/guilhermeleobas/224/orig -> origin/gh/guilhermeleobas/224/orig 2025-08-14T21:22:30.2315204Z * [new branch] gh/guilhermeleobas/225/base -> origin/gh/guilhermeleobas/225/base 2025-08-14T21:22:30.2315400Z * [new branch] gh/guilhermeleobas/225/head -> origin/gh/guilhermeleobas/225/head 2025-08-14T21:22:30.2316045Z * [new branch] gh/guilhermeleobas/225/orig -> origin/gh/guilhermeleobas/225/orig 2025-08-14T21:22:30.2317317Z * [new branch] gh/guilhermeleobas/226/base -> origin/gh/guilhermeleobas/226/base 2025-08-14T21:22:30.2318222Z * [new branch] gh/guilhermeleobas/226/head -> origin/gh/guilhermeleobas/226/head 2025-08-14T21:22:30.2319139Z * [new branch] gh/guilhermeleobas/226/orig -> origin/gh/guilhermeleobas/226/orig 
2025-08-14T21:22:30.2326793Z * [new branch] gh/guilhermeleobas/227/base -> origin/gh/guilhermeleobas/227/base 2025-08-14T21:22:30.2327085Z * [new branch] gh/guilhermeleobas/227/head -> origin/gh/guilhermeleobas/227/head 2025-08-14T21:22:30.2327355Z * [new branch] gh/guilhermeleobas/227/orig -> origin/gh/guilhermeleobas/227/orig 2025-08-14T21:22:30.2327641Z * [new branch] gh/guilhermeleobas/228/base -> origin/gh/guilhermeleobas/228/base 2025-08-14T21:22:30.2327913Z * [new branch] gh/guilhermeleobas/228/head -> origin/gh/guilhermeleobas/228/head 2025-08-14T21:22:30.2328187Z * [new branch] gh/guilhermeleobas/228/orig -> origin/gh/guilhermeleobas/228/orig 2025-08-14T21:22:30.2328468Z * [new branch] gh/guilhermeleobas/229/base -> origin/gh/guilhermeleobas/229/base 2025-08-14T21:22:30.2328720Z * [new branch] gh/guilhermeleobas/229/head -> origin/gh/guilhermeleobas/229/head 2025-08-14T21:22:30.2329170Z * [new branch] gh/guilhermeleobas/229/orig -> origin/gh/guilhermeleobas/229/orig 2025-08-14T21:22:30.2330402Z * [new branch] gh/guilhermeleobas/230/base -> origin/gh/guilhermeleobas/230/base 2025-08-14T21:22:30.2331388Z * [new branch] gh/guilhermeleobas/230/head -> origin/gh/guilhermeleobas/230/head 2025-08-14T21:22:30.2332301Z * [new branch] gh/guilhermeleobas/230/orig -> origin/gh/guilhermeleobas/230/orig 2025-08-14T21:22:30.2333578Z * [new branch] gh/guilhermeleobas/231/base -> origin/gh/guilhermeleobas/231/base 2025-08-14T21:22:30.2336625Z * [new branch] gh/guilhermeleobas/231/head -> origin/gh/guilhermeleobas/231/head 2025-08-14T21:22:30.2339838Z * [new branch] gh/guilhermeleobas/231/orig -> origin/gh/guilhermeleobas/231/orig 2025-08-14T21:22:30.2341190Z * [new branch] gh/guilhermeleobas/232/base -> origin/gh/guilhermeleobas/232/base 2025-08-14T21:22:30.2342168Z * [new branch] gh/guilhermeleobas/232/head -> origin/gh/guilhermeleobas/232/head 2025-08-14T21:22:30.2343095Z * [new branch] gh/guilhermeleobas/232/orig -> origin/gh/guilhermeleobas/232/orig 2025-08-14T21:22:30.2344420Z * [new branch] gh/guilhermeleobas/233/base -> origin/gh/guilhermeleobas/233/base 2025-08-14T21:22:30.2345289Z * [new branch] gh/guilhermeleobas/233/head -> origin/gh/guilhermeleobas/233/head 2025-08-14T21:22:30.2346255Z * [new branch] gh/guilhermeleobas/233/orig -> origin/gh/guilhermeleobas/233/orig 2025-08-14T21:22:30.2347630Z * [new branch] gh/guilhermeleobas/73/base -> origin/gh/guilhermeleobas/73/base 2025-08-14T21:22:30.2348500Z * [new branch] gh/guilhermeleobas/73/head -> origin/gh/guilhermeleobas/73/head 2025-08-14T21:22:30.2353565Z * [new branch] gh/guilhermeleobas/73/orig -> origin/gh/guilhermeleobas/73/orig 2025-08-14T21:22:30.2353888Z * [new branch] gh/henrylhtsang/103/base -> origin/gh/henrylhtsang/103/base 2025-08-14T21:22:30.2354100Z * [new branch] gh/henrylhtsang/103/head -> origin/gh/henrylhtsang/103/head 2025-08-14T21:22:30.2354318Z * [new branch] gh/henrylhtsang/103/orig -> origin/gh/henrylhtsang/103/orig 2025-08-14T21:22:30.2355976Z * [new branch] gh/henrylhtsang/108/base -> origin/gh/henrylhtsang/108/base 2025-08-14T21:22:30.2356244Z * [new branch] gh/henrylhtsang/108/head -> origin/gh/henrylhtsang/108/head 2025-08-14T21:22:30.2357210Z * [new branch] gh/henrylhtsang/108/orig -> origin/gh/henrylhtsang/108/orig 2025-08-14T21:22:30.2358616Z * [new branch] gh/henrylhtsang/118/base -> origin/gh/henrylhtsang/118/base 2025-08-14T21:22:30.2359325Z * [new branch] gh/henrylhtsang/118/head -> origin/gh/henrylhtsang/118/head 2025-08-14T21:22:30.2360303Z * [new branch] gh/henrylhtsang/118/orig -> 
origin/gh/henrylhtsang/118/orig 2025-08-14T21:22:30.2361764Z * [new branch] gh/henrylhtsang/123/base -> origin/gh/henrylhtsang/123/base 2025-08-14T21:22:30.2362756Z * [new branch] gh/henrylhtsang/123/head -> origin/gh/henrylhtsang/123/head 2025-08-14T21:22:30.2372297Z * [new branch] gh/henrylhtsang/123/orig -> origin/gh/henrylhtsang/123/orig 2025-08-14T21:22:30.2372528Z * [new branch] gh/henrylhtsang/124/base -> origin/gh/henrylhtsang/124/base 2025-08-14T21:22:30.2372758Z * [new branch] gh/henrylhtsang/124/head -> origin/gh/henrylhtsang/124/head 2025-08-14T21:22:30.2372982Z * [new branch] gh/henrylhtsang/124/orig -> origin/gh/henrylhtsang/124/orig 2025-08-14T21:22:30.2373341Z * [new branch] gh/henrylhtsang/125/base -> origin/gh/henrylhtsang/125/base 2025-08-14T21:22:30.2373947Z * [new branch] gh/henrylhtsang/125/head -> origin/gh/henrylhtsang/125/head 2025-08-14T21:22:30.2374875Z * [new branch] gh/henrylhtsang/125/orig -> origin/gh/henrylhtsang/125/orig 2025-08-14T21:22:30.2376005Z * [new branch] gh/henrylhtsang/126/base -> origin/gh/henrylhtsang/126/base 2025-08-14T21:22:30.2377351Z * [new branch] gh/henrylhtsang/126/head -> origin/gh/henrylhtsang/126/head 2025-08-14T21:22:30.2378328Z * [new branch] gh/henrylhtsang/126/orig -> origin/gh/henrylhtsang/126/orig 2025-08-14T21:22:30.2382898Z * [new branch] gh/henrylhtsang/127/base -> origin/gh/henrylhtsang/127/base 2025-08-14T21:22:30.2383133Z * [new branch] gh/henrylhtsang/127/head -> origin/gh/henrylhtsang/127/head 2025-08-14T21:22:30.2383370Z * [new branch] gh/henrylhtsang/127/orig -> origin/gh/henrylhtsang/127/orig 2025-08-14T21:22:30.2383550Z * [new branch] gh/henrylhtsang/128/base -> origin/gh/henrylhtsang/128/base 2025-08-14T21:22:30.2383900Z * [new branch] gh/henrylhtsang/128/head -> origin/gh/henrylhtsang/128/head 2025-08-14T21:22:30.2384862Z * [new branch] gh/henrylhtsang/128/orig -> origin/gh/henrylhtsang/128/orig 2025-08-14T21:22:30.2386252Z * [new branch] gh/henrylhtsang/129/base -> origin/gh/henrylhtsang/129/base 2025-08-14T21:22:30.2387245Z * [new branch] gh/henrylhtsang/129/head -> origin/gh/henrylhtsang/129/head 2025-08-14T21:22:30.2388148Z * [new branch] gh/henrylhtsang/129/orig -> origin/gh/henrylhtsang/129/orig 2025-08-14T21:22:30.2389512Z * [new branch] gh/henrylhtsang/130/base -> origin/gh/henrylhtsang/130/base 2025-08-14T21:22:30.2390808Z * [new branch] gh/henrylhtsang/130/head -> origin/gh/henrylhtsang/130/head 2025-08-14T21:22:30.2391986Z * [new branch] gh/henrylhtsang/131/base -> origin/gh/henrylhtsang/131/base 2025-08-14T21:22:30.2393088Z * [new branch] gh/henrylhtsang/131/head -> origin/gh/henrylhtsang/131/head 2025-08-14T21:22:30.2398166Z * [new branch] gh/henrylhtsang/131/orig -> origin/gh/henrylhtsang/131/orig 2025-08-14T21:22:30.2399461Z * [new branch] gh/henrylhtsang/132/base -> origin/gh/henrylhtsang/132/base 2025-08-14T21:22:30.2400390Z * [new branch] gh/henrylhtsang/132/head -> origin/gh/henrylhtsang/132/head 2025-08-14T21:22:30.2401466Z * [new branch] gh/henrylhtsang/132/orig -> origin/gh/henrylhtsang/132/orig 2025-08-14T21:22:30.2402782Z * [new branch] gh/henrylhtsang/133/base -> origin/gh/henrylhtsang/133/base 2025-08-14T21:22:30.2403777Z * [new branch] gh/henrylhtsang/133/head -> origin/gh/henrylhtsang/133/head 2025-08-14T21:22:30.2404710Z * [new branch] gh/henrylhtsang/133/orig -> origin/gh/henrylhtsang/133/orig 2025-08-14T21:22:30.2406024Z * [new branch] gh/henrylhtsang/134/base -> origin/gh/henrylhtsang/134/base 2025-08-14T21:22:30.2407005Z * [new branch] gh/henrylhtsang/134/head -> 
origin/gh/henrylhtsang/134/head 2025-08-14T21:22:30.2411898Z * [new branch] gh/henrylhtsang/134/orig -> origin/gh/henrylhtsang/134/orig 2025-08-14T21:22:30.2412093Z * [new branch] gh/henrylhtsang/135/base -> origin/gh/henrylhtsang/135/base 2025-08-14T21:22:30.2412270Z * [new branch] gh/henrylhtsang/135/head -> origin/gh/henrylhtsang/135/head 2025-08-14T21:22:30.2412451Z * [new branch] gh/henrylhtsang/135/orig -> origin/gh/henrylhtsang/135/orig 2025-08-14T21:22:30.2412928Z * [new branch] gh/henrylhtsang/136/base -> origin/gh/henrylhtsang/136/base 2025-08-14T21:22:30.2413898Z * [new branch] gh/henrylhtsang/136/head -> origin/gh/henrylhtsang/136/head 2025-08-14T21:22:30.2414868Z * [new branch] gh/henrylhtsang/136/orig -> origin/gh/henrylhtsang/136/orig 2025-08-14T21:22:30.2416209Z * [new branch] gh/henrylhtsang/137/base -> origin/gh/henrylhtsang/137/base 2025-08-14T21:22:30.2417147Z * [new branch] gh/henrylhtsang/137/head -> origin/gh/henrylhtsang/137/head 2025-08-14T21:22:30.2417856Z * [new branch] gh/henrylhtsang/137/orig -> origin/gh/henrylhtsang/137/orig 2025-08-14T21:22:30.2419174Z * [new branch] gh/henrylhtsang/138/base -> origin/gh/henrylhtsang/138/base 2025-08-14T21:22:30.2420104Z * [new branch] gh/henrylhtsang/138/head -> origin/gh/henrylhtsang/138/head 2025-08-14T21:22:30.2421008Z * [new branch] gh/henrylhtsang/138/orig -> origin/gh/henrylhtsang/138/orig 2025-08-14T21:22:30.2431426Z * [new branch] gh/henrylhtsang/139/base -> origin/gh/henrylhtsang/139/base 2025-08-14T21:22:30.2432427Z * [new branch] gh/henrylhtsang/139/head -> origin/gh/henrylhtsang/139/head 2025-08-14T21:22:30.2433354Z * [new branch] gh/henrylhtsang/139/orig -> origin/gh/henrylhtsang/139/orig 2025-08-14T21:22:30.2434712Z * [new branch] gh/henrylhtsang/140/base -> origin/gh/henrylhtsang/140/base 2025-08-14T21:22:30.2435685Z * [new branch] gh/henrylhtsang/140/head -> origin/gh/henrylhtsang/140/head 2025-08-14T21:22:30.2444862Z * [new branch] gh/henrylhtsang/140/orig -> origin/gh/henrylhtsang/140/orig 2025-08-14T21:22:30.2445101Z * [new branch] gh/henrylhtsang/141/base -> origin/gh/henrylhtsang/141/base 2025-08-14T21:22:30.2445336Z * [new branch] gh/henrylhtsang/141/head -> origin/gh/henrylhtsang/141/head 2025-08-14T21:22:30.2445562Z * [new branch] gh/henrylhtsang/141/orig -> origin/gh/henrylhtsang/141/orig 2025-08-14T21:22:30.2445786Z * [new branch] gh/henrylhtsang/142/base -> origin/gh/henrylhtsang/142/base 2025-08-14T21:22:30.2446026Z * [new branch] gh/henrylhtsang/142/head -> origin/gh/henrylhtsang/142/head 2025-08-14T21:22:30.2446317Z * [new branch] gh/henrylhtsang/142/orig -> origin/gh/henrylhtsang/142/orig 2025-08-14T21:22:30.2447148Z * [new branch] gh/henrylhtsang/143/base -> origin/gh/henrylhtsang/143/base 2025-08-14T21:22:30.2448045Z * [new branch] gh/henrylhtsang/143/head -> origin/gh/henrylhtsang/143/head 2025-08-14T21:22:30.2449323Z * [new branch] gh/henrylhtsang/143/orig -> origin/gh/henrylhtsang/143/orig 2025-08-14T21:22:30.2452941Z * [new branch] gh/henrylhtsang/144/base -> origin/gh/henrylhtsang/144/base 2025-08-14T21:22:30.2453122Z * [new branch] gh/henrylhtsang/144/head -> origin/gh/henrylhtsang/144/head 2025-08-14T21:22:30.2453309Z * [new branch] gh/henrylhtsang/144/orig -> origin/gh/henrylhtsang/144/orig 2025-08-14T21:22:30.2454280Z * [new branch] gh/henrylhtsang/145/base -> origin/gh/henrylhtsang/145/base 2025-08-14T21:22:30.2455336Z * [new branch] gh/henrylhtsang/145/head -> origin/gh/henrylhtsang/145/head 2025-08-14T21:22:30.2456698Z * [new branch] gh/henrylhtsang/145/orig -> 
origin/gh/henrylhtsang/145/orig 2025-08-14T21:22:30.2458067Z * [new branch] gh/henrylhtsang/146/base -> origin/gh/henrylhtsang/146/base 2025-08-14T21:22:30.2459056Z * [new branch] gh/henrylhtsang/146/head -> origin/gh/henrylhtsang/146/head 2025-08-14T21:22:30.2459899Z * [new branch] gh/henrylhtsang/146/orig -> origin/gh/henrylhtsang/146/orig 2025-08-14T21:22:30.2461425Z * [new branch] gh/huydhn/1/head -> origin/gh/huydhn/1/head 2025-08-14T21:22:30.2462261Z * [new branch] gh/huydhn/1/next -> origin/gh/huydhn/1/next 2025-08-14T21:22:30.2463411Z * [new branch] gh/huydhn/2/head -> origin/gh/huydhn/2/head 2025-08-14T21:22:30.2464263Z * [new branch] gh/huydhn/2/next -> origin/gh/huydhn/2/next 2025-08-14T21:22:30.2465226Z * [new branch] gh/huydhn/2/orig -> origin/gh/huydhn/2/orig 2025-08-14T21:22:30.2473888Z * [new branch] gh/huydhn/3/head -> origin/gh/huydhn/3/head 2025-08-14T21:22:30.2474080Z * [new branch] gh/huydhn/3/next -> origin/gh/huydhn/3/next 2025-08-14T21:22:30.2474276Z * [new branch] gh/huydhn/3/orig -> origin/gh/huydhn/3/orig 2025-08-14T21:22:30.2474464Z * [new branch] gh/huydhn/4/head -> origin/gh/huydhn/4/head 2025-08-14T21:22:30.2474962Z * [new branch] gh/huydhn/4/next -> origin/gh/huydhn/4/next 2025-08-14T21:22:30.2475963Z * [new branch] gh/huydhn/4/orig -> origin/gh/huydhn/4/orig 2025-08-14T21:22:30.2477105Z * [new branch] gh/huydhn/5/head -> origin/gh/huydhn/5/head 2025-08-14T21:22:30.2477933Z * [new branch] gh/huydhn/5/next -> origin/gh/huydhn/5/next 2025-08-14T21:22:30.2478849Z * [new branch] gh/huydhn/5/orig -> origin/gh/huydhn/5/orig 2025-08-14T21:22:30.2484103Z * [new branch] gh/huydhn/6/head -> origin/gh/huydhn/6/head 2025-08-14T21:22:30.2484259Z * [new branch] gh/huydhn/6/next -> origin/gh/huydhn/6/next 2025-08-14T21:22:30.2484416Z * [new branch] gh/huydhn/6/orig -> origin/gh/huydhn/6/orig 2025-08-14T21:22:30.2484589Z * [new branch] gh/int3/97/base -> origin/gh/int3/97/base 2025-08-14T21:22:30.2484770Z * [new branch] gh/int3/97/head -> origin/gh/int3/97/head 2025-08-14T21:22:30.2486304Z * [new branch] gh/isuruf/101/base -> origin/gh/isuruf/101/base 2025-08-14T21:22:30.2487188Z * [new branch] gh/isuruf/101/head -> origin/gh/isuruf/101/head 2025-08-14T21:22:30.2488445Z * [new branch] gh/isuruf/116/base -> origin/gh/isuruf/116/base 2025-08-14T21:22:30.2489385Z * [new branch] gh/isuruf/116/head -> origin/gh/isuruf/116/head 2025-08-14T21:22:30.2490429Z * [new branch] gh/isuruf/116/orig -> origin/gh/isuruf/116/orig 2025-08-14T21:22:30.2491582Z * [new branch] gh/isuruf/141/base -> origin/gh/isuruf/141/base 2025-08-14T21:22:30.2492452Z * [new branch] gh/isuruf/141/head -> origin/gh/isuruf/141/head 2025-08-14T21:22:30.2493391Z * [new branch] gh/isuruf/141/orig -> origin/gh/isuruf/141/orig 2025-08-14T21:22:30.2502847Z * [new branch] gh/isuruf/142/base -> origin/gh/isuruf/142/base 2025-08-14T21:22:30.2503040Z * [new branch] gh/isuruf/142/head -> origin/gh/isuruf/142/head 2025-08-14T21:22:30.2503245Z * [new branch] gh/isuruf/142/orig -> origin/gh/isuruf/142/orig 2025-08-14T21:22:30.2503446Z * [new branch] gh/isuruf/81/base -> origin/gh/isuruf/81/base 2025-08-14T21:22:30.2503648Z * [new branch] gh/isuruf/81/head -> origin/gh/isuruf/81/head 2025-08-14T21:22:30.2503943Z * [new branch] gh/isuruf/81/orig -> origin/gh/isuruf/81/orig 2025-08-14T21:22:30.2505548Z * [new branch] gh/jamesjwu/140/base -> origin/gh/jamesjwu/140/base 2025-08-14T21:22:30.2506447Z * [new branch] gh/jamesjwu/140/head -> origin/gh/jamesjwu/140/head 2025-08-14T21:22:30.2507367Z * [new branch] gh/jamesjwu/140/orig -> 
origin/gh/jamesjwu/140/orig 2025-08-14T21:22:30.2508625Z * [new branch] gh/jamesjwu/150/base -> origin/gh/jamesjwu/150/base 2025-08-14T21:22:30.2513292Z * [new branch] gh/jamesjwu/150/head -> origin/gh/jamesjwu/150/head 2025-08-14T21:22:30.2513497Z * [new branch] gh/jamesjwu/150/orig -> origin/gh/jamesjwu/150/orig 2025-08-14T21:22:30.2513694Z * [new branch] gh/jamesjwu/154/base -> origin/gh/jamesjwu/154/base 2025-08-14T21:22:30.2513889Z * [new branch] gh/jamesjwu/154/head -> origin/gh/jamesjwu/154/head 2025-08-14T21:22:30.2514152Z * [new branch] gh/jamesjwu/154/orig -> origin/gh/jamesjwu/154/orig 2025-08-14T21:22:30.2515832Z * [new branch] gh/jamesjwu/155/base -> origin/gh/jamesjwu/155/base 2025-08-14T21:22:30.2516100Z * [new branch] gh/jamesjwu/155/head -> origin/gh/jamesjwu/155/head 2025-08-14T21:22:30.2516993Z * [new branch] gh/jamesjwu/155/orig -> origin/gh/jamesjwu/155/orig 2025-08-14T21:22:30.2518236Z * [new branch] gh/jamesjwu/159/base -> origin/gh/jamesjwu/159/base 2025-08-14T21:22:30.2519158Z * [new branch] gh/jamesjwu/159/head -> origin/gh/jamesjwu/159/head 2025-08-14T21:22:30.2520075Z * [new branch] gh/jamesjwu/159/orig -> origin/gh/jamesjwu/159/orig 2025-08-14T21:22:30.2521796Z * [new branch] gh/jamesjwu/163/base -> origin/gh/jamesjwu/163/base 2025-08-14T21:22:30.2522745Z * [new branch] gh/jamesjwu/163/head -> origin/gh/jamesjwu/163/head 2025-08-14T21:22:30.2523741Z * [new branch] gh/jamesjwu/163/orig -> origin/gh/jamesjwu/163/orig 2025-08-14T21:22:30.2529373Z * [new branch] gh/jamesjwu/171/base -> origin/gh/jamesjwu/171/base 2025-08-14T21:22:30.2530273Z * [new branch] gh/jamesjwu/171/head -> origin/gh/jamesjwu/171/head 2025-08-14T21:22:30.2531136Z * [new branch] gh/jamesjwu/171/orig -> origin/gh/jamesjwu/171/orig 2025-08-14T21:22:30.2532430Z * [new branch] gh/jamesjwu/174/base -> origin/gh/jamesjwu/174/base 2025-08-14T21:22:30.2533339Z * [new branch] gh/jamesjwu/174/head -> origin/gh/jamesjwu/174/head 2025-08-14T21:22:30.2534336Z * [new branch] gh/jamesjwu/174/orig -> origin/gh/jamesjwu/174/orig 2025-08-14T21:22:30.2535480Z * [new branch] gh/jamesjwu/175/base -> origin/gh/jamesjwu/175/base 2025-08-14T21:22:30.2536445Z * [new branch] gh/jamesjwu/175/head -> origin/gh/jamesjwu/175/head 2025-08-14T21:22:30.2537279Z * [new branch] gh/jamesjwu/175/orig -> origin/gh/jamesjwu/175/orig 2025-08-14T21:22:30.2544394Z * [new branch] gh/jamesjwu/176/base -> origin/gh/jamesjwu/176/base 2025-08-14T21:22:30.2544607Z * [new branch] gh/jamesjwu/176/head -> origin/gh/jamesjwu/176/head 2025-08-14T21:22:30.2544812Z * [new branch] gh/jamesjwu/176/orig -> origin/gh/jamesjwu/176/orig 2025-08-14T21:22:30.2545024Z * [new branch] gh/jamesjwu/177/base -> origin/gh/jamesjwu/177/base 2025-08-14T21:22:30.2545223Z * [new branch] gh/jamesjwu/177/head -> origin/gh/jamesjwu/177/head 2025-08-14T21:22:30.2545437Z * [new branch] gh/jamesjwu/177/orig -> origin/gh/jamesjwu/177/orig 2025-08-14T21:22:30.2546071Z * [new branch] gh/jamesjwu/178/base -> origin/gh/jamesjwu/178/base 2025-08-14T21:22:30.2547133Z * [new branch] gh/jamesjwu/178/head -> origin/gh/jamesjwu/178/head 2025-08-14T21:22:30.2548085Z * [new branch] gh/jamesjwu/178/orig -> origin/gh/jamesjwu/178/orig 2025-08-14T21:22:30.2571349Z * [new branch] gh/jamesjwu/179/base -> origin/gh/jamesjwu/179/base 2025-08-14T21:22:30.2571569Z * [new branch] gh/jamesjwu/179/head -> origin/gh/jamesjwu/179/head 2025-08-14T21:22:30.2571774Z * [new branch] gh/jamesjwu/179/orig -> origin/gh/jamesjwu/179/orig 2025-08-14T21:22:30.2571979Z * [new branch] gh/jamesjwu/180/base -> 
origin/gh/jamesjwu/180/base 2025-08-14T21:22:30.2572196Z * [new branch] gh/jamesjwu/180/head -> origin/gh/jamesjwu/180/head 2025-08-14T21:22:30.2572682Z * [new branch] gh/jamesjwu/180/orig -> origin/gh/jamesjwu/180/orig 2025-08-14T21:22:30.2573999Z * [new branch] gh/jamesjwu/181/base -> origin/gh/jamesjwu/181/base 2025-08-14T21:22:30.2574937Z * [new branch] gh/jamesjwu/181/head -> origin/gh/jamesjwu/181/head 2025-08-14T21:22:30.2575830Z * [new branch] gh/jamesjwu/181/orig -> origin/gh/jamesjwu/181/orig 2025-08-14T21:22:30.2577261Z * [new branch] gh/jamesjwu/182/base -> origin/gh/jamesjwu/182/base 2025-08-14T21:22:30.2578166Z * [new branch] gh/jamesjwu/182/head -> origin/gh/jamesjwu/182/head 2025-08-14T21:22:30.2579062Z * [new branch] gh/jamesjwu/182/orig -> origin/gh/jamesjwu/182/orig 2025-08-14T21:22:30.2580676Z * [new branch] gh/jamesjwu/183/base -> origin/gh/jamesjwu/183/base 2025-08-14T21:22:30.2581749Z * [new branch] gh/jamesjwu/183/head -> origin/gh/jamesjwu/183/head 2025-08-14T21:22:30.2582752Z * [new branch] gh/jamesjwu/183/orig -> origin/gh/jamesjwu/183/orig 2025-08-14T21:22:30.2584039Z * [new branch] gh/jamesjwu/184/base -> origin/gh/jamesjwu/184/base 2025-08-14T21:22:30.2584930Z * [new branch] gh/jamesjwu/184/head -> origin/gh/jamesjwu/184/head 2025-08-14T21:22:30.2585861Z * [new branch] gh/jamesjwu/184/orig -> origin/gh/jamesjwu/184/orig 2025-08-14T21:22:30.2587279Z * [new branch] gh/jamesjwu/52/base -> origin/gh/jamesjwu/52/base 2025-08-14T21:22:30.2588355Z * [new branch] gh/jamesjwu/52/head -> origin/gh/jamesjwu/52/head 2025-08-14T21:22:30.2589536Z * [new branch] gh/jamesjwu/53/base -> origin/gh/jamesjwu/53/base 2025-08-14T21:22:30.2590441Z * [new branch] gh/jamesjwu/53/head -> origin/gh/jamesjwu/53/head 2025-08-14T21:22:30.2591694Z * [new branch] gh/jamesjwu/54/base -> origin/gh/jamesjwu/54/base 2025-08-14T21:22:30.2592563Z * [new branch] gh/jamesjwu/54/head -> origin/gh/jamesjwu/54/head 2025-08-14T21:22:30.2593784Z * [new branch] gh/jamesjwu/55/base -> origin/gh/jamesjwu/55/base 2025-08-14T21:22:30.2594585Z * [new branch] gh/jamesjwu/55/head -> origin/gh/jamesjwu/55/head 2025-08-14T21:22:30.2595769Z * [new branch] gh/jamesjwu/56/base -> origin/gh/jamesjwu/56/base 2025-08-14T21:22:30.2604533Z * [new branch] gh/jamesjwu/56/head -> origin/gh/jamesjwu/56/head 2025-08-14T21:22:30.2604737Z * [new branch] gh/jamesjwu/57/base -> origin/gh/jamesjwu/57/base 2025-08-14T21:22:30.2605387Z * [new branch] gh/jamesjwu/57/head -> origin/gh/jamesjwu/57/head 2025-08-14T21:22:30.2606605Z * [new branch] gh/jamesjwu/58/base -> origin/gh/jamesjwu/58/base 2025-08-14T21:22:30.2607490Z * [new branch] gh/jamesjwu/58/head -> origin/gh/jamesjwu/58/head 2025-08-14T21:22:30.2608591Z * [new branch] gh/jamesjwu/59/base -> origin/gh/jamesjwu/59/base 2025-08-14T21:22:30.2609446Z * [new branch] gh/jamesjwu/59/head -> origin/gh/jamesjwu/59/head 2025-08-14T21:22:30.2612500Z * [new branch] gh/jamesjwu/60/base -> origin/gh/jamesjwu/60/base 2025-08-14T21:22:30.2612664Z * [new branch] gh/jamesjwu/60/head -> origin/gh/jamesjwu/60/head 2025-08-14T21:22:30.2613267Z * [new branch] gh/jamesjwu/61/base -> origin/gh/jamesjwu/61/base 2025-08-14T21:22:30.2618746Z * [new branch] gh/jamesjwu/61/head -> origin/gh/jamesjwu/61/head 2025-08-14T21:22:30.2618971Z * [new branch] gh/jamesjwu/62/base -> origin/gh/jamesjwu/62/base 2025-08-14T21:22:30.2619184Z * [new branch] gh/jamesjwu/62/head -> origin/gh/jamesjwu/62/head 2025-08-14T21:22:30.2619364Z * [new branch] gh/jamesjwu/63/base -> origin/gh/jamesjwu/63/base 
2025-08-14T21:22:30.2619534Z * [new branch] gh/jamesjwu/63/head -> origin/gh/jamesjwu/63/head 2025-08-14T21:22:30.2620122Z * [new branch] gh/jamesjwu/64/base -> origin/gh/jamesjwu/64/base 2025-08-14T21:22:30.2621062Z * [new branch] gh/jamesjwu/64/head -> origin/gh/jamesjwu/64/head 2025-08-14T21:22:30.2622231Z * [new branch] gh/jamesjwu/65/base -> origin/gh/jamesjwu/65/base 2025-08-14T21:22:30.2623054Z * [new branch] gh/jamesjwu/65/head -> origin/gh/jamesjwu/65/head 2025-08-14T21:22:30.2626707Z * [new branch] gh/janeyx99/165/base -> origin/gh/janeyx99/165/base 2025-08-14T21:22:30.2629924Z * [new branch] gh/janeyx99/165/head -> origin/gh/janeyx99/165/head 2025-08-14T21:22:30.2630866Z * [new branch] gh/janeyx99/165/orig -> origin/gh/janeyx99/165/orig 2025-08-14T21:22:30.2632013Z * [new branch] gh/janeyx99/201/base -> origin/gh/janeyx99/201/base 2025-08-14T21:22:30.2633003Z * [new branch] gh/janeyx99/201/head -> origin/gh/janeyx99/201/head 2025-08-14T21:22:30.2633951Z * [new branch] gh/janeyx99/201/orig -> origin/gh/janeyx99/201/orig 2025-08-14T21:22:30.2635446Z * [new branch] gh/janeyx99/225/base -> origin/gh/janeyx99/225/base 2025-08-14T21:22:30.2636361Z * [new branch] gh/janeyx99/225/head -> origin/gh/janeyx99/225/head 2025-08-14T21:22:30.2637255Z * [new branch] gh/janeyx99/225/orig -> origin/gh/janeyx99/225/orig 2025-08-14T21:22:30.2638496Z * [new branch] gh/janeyx99/256/base -> origin/gh/janeyx99/256/base 2025-08-14T21:22:30.2643588Z * [new branch] gh/janeyx99/256/head -> origin/gh/janeyx99/256/head 2025-08-14T21:22:30.2643783Z * [new branch] gh/janeyx99/256/orig -> origin/gh/janeyx99/256/orig 2025-08-14T21:22:30.2643979Z * [new branch] gh/janeyx99/268/base -> origin/gh/janeyx99/268/base 2025-08-14T21:22:30.2644231Z * [new branch] gh/janeyx99/268/head -> origin/gh/janeyx99/268/head 2025-08-14T21:22:30.2644433Z * [new branch] gh/janeyx99/268/orig -> origin/gh/janeyx99/268/orig 2025-08-14T21:22:30.2644976Z * [new branch] gh/janeyx99/269/base -> origin/gh/janeyx99/269/base 2025-08-14T21:22:30.2646107Z * [new branch] gh/janeyx99/269/head -> origin/gh/janeyx99/269/head 2025-08-14T21:22:30.2646957Z * [new branch] gh/janeyx99/269/orig -> origin/gh/janeyx99/269/orig 2025-08-14T21:22:30.2648217Z * [new branch] gh/janeyx99/274/base -> origin/gh/janeyx99/274/base 2025-08-14T21:22:30.2649576Z * [new branch] gh/janeyx99/274/head -> origin/gh/janeyx99/274/head 2025-08-14T21:22:30.2650943Z * [new branch] gh/janeyx99/274/orig -> origin/gh/janeyx99/274/orig 2025-08-14T21:22:30.2652247Z * [new branch] gh/janeyx99/276/base -> origin/gh/janeyx99/276/base 2025-08-14T21:22:30.2653238Z * [new branch] gh/janeyx99/276/head -> origin/gh/janeyx99/276/head 2025-08-14T21:22:30.2662323Z * [new branch] gh/janeyx99/276/orig -> origin/gh/janeyx99/276/orig 2025-08-14T21:22:30.2662539Z * [new branch] gh/janeyx99/277/base -> origin/gh/janeyx99/277/base 2025-08-14T21:22:30.2662748Z * [new branch] gh/janeyx99/277/head -> origin/gh/janeyx99/277/head 2025-08-14T21:22:30.2662947Z * [new branch] gh/janeyx99/277/orig -> origin/gh/janeyx99/277/orig 2025-08-14T21:22:30.2663109Z * [new branch] gh/janeyx99/278/base -> origin/gh/janeyx99/278/base 2025-08-14T21:22:30.2664043Z * [new branch] gh/janeyx99/278/head -> origin/gh/janeyx99/278/head 2025-08-14T21:22:30.2664957Z * [new branch] gh/janeyx99/278/orig -> origin/gh/janeyx99/278/orig 2025-08-14T21:22:30.2666263Z * [new branch] gh/janeyx99/279/base -> origin/gh/janeyx99/279/base 2025-08-14T21:22:30.2667238Z * [new branch] gh/janeyx99/279/head -> origin/gh/janeyx99/279/head 
2025-08-14T21:22:30.2668104Z * [new branch] gh/janeyx99/279/orig -> origin/gh/janeyx99/279/orig 2025-08-14T21:22:30.2672752Z * [new branch] gh/janeyx99/280/base -> origin/gh/janeyx99/280/base 2025-08-14T21:22:30.2672921Z * [new branch] gh/janeyx99/280/head -> origin/gh/janeyx99/280/head 2025-08-14T21:22:30.2673094Z * [new branch] gh/janeyx99/280/orig -> origin/gh/janeyx99/280/orig 2025-08-14T21:22:30.2673255Z * [new branch] gh/janeyx99/281/base -> origin/gh/janeyx99/281/base 2025-08-14T21:22:30.2673726Z * [new branch] gh/janeyx99/281/head -> origin/gh/janeyx99/281/head 2025-08-14T21:22:30.2674679Z * [new branch] gh/janeyx99/281/orig -> origin/gh/janeyx99/281/orig 2025-08-14T21:22:30.2676403Z * [new branch] gh/janeyx99/282/base -> origin/gh/janeyx99/282/base 2025-08-14T21:22:30.2677376Z * [new branch] gh/janeyx99/282/head -> origin/gh/janeyx99/282/head 2025-08-14T21:22:30.2678280Z * [new branch] gh/janeyx99/282/orig -> origin/gh/janeyx99/282/orig 2025-08-14T21:22:30.2679548Z * [new branch] gh/janeyx99/283/base -> origin/gh/janeyx99/283/base 2025-08-14T21:22:30.2680503Z * [new branch] gh/janeyx99/283/head -> origin/gh/janeyx99/283/head 2025-08-14T21:22:30.2681597Z * [new branch] gh/janeyx99/283/orig -> origin/gh/janeyx99/283/orig 2025-08-14T21:22:30.2683166Z * [new branch] gh/janeyx99/284/base -> origin/gh/janeyx99/284/base 2025-08-14T21:22:30.2688412Z * [new branch] gh/janeyx99/284/head -> origin/gh/janeyx99/284/head 2025-08-14T21:22:30.2689291Z * [new branch] gh/janeyx99/284/orig -> origin/gh/janeyx99/284/orig 2025-08-14T21:22:30.2690994Z * [new branch] gh/janeyx99/285/base -> origin/gh/janeyx99/285/base 2025-08-14T21:22:30.2691895Z * [new branch] gh/janeyx99/285/head -> origin/gh/janeyx99/285/head 2025-08-14T21:22:30.2692807Z * [new branch] gh/janeyx99/285/orig -> origin/gh/janeyx99/285/orig 2025-08-14T21:22:30.2694172Z * [new branch] gh/janeyx99/286/base -> origin/gh/janeyx99/286/base 2025-08-14T21:22:30.2695173Z * [new branch] gh/janeyx99/286/head -> origin/gh/janeyx99/286/head 2025-08-14T21:22:30.2696121Z * [new branch] gh/janeyx99/286/orig -> origin/gh/janeyx99/286/orig 2025-08-14T21:22:30.2701684Z * [new branch] gh/janeyx99/287/base -> origin/gh/janeyx99/287/base 2025-08-14T21:22:30.2701863Z * [new branch] gh/janeyx99/287/head -> origin/gh/janeyx99/287/head 2025-08-14T21:22:30.2702026Z * [new branch] gh/janeyx99/287/orig -> origin/gh/janeyx99/287/orig 2025-08-14T21:22:30.2702197Z * [new branch] gh/janeyx99/288/base -> origin/gh/janeyx99/288/base 2025-08-14T21:22:30.2702361Z * [new branch] gh/janeyx99/288/head -> origin/gh/janeyx99/288/head 2025-08-14T21:22:30.2702777Z * [new branch] gh/janeyx99/288/orig -> origin/gh/janeyx99/288/orig 2025-08-14T21:22:30.2704020Z * [new branch] gh/janeyx99/289/base -> origin/gh/janeyx99/289/base 2025-08-14T21:22:30.2704943Z * [new branch] gh/janeyx99/289/head -> origin/gh/janeyx99/289/head 2025-08-14T21:22:30.2705923Z * [new branch] gh/janeyx99/289/orig -> origin/gh/janeyx99/289/orig 2025-08-14T21:22:30.2707420Z * [new branch] gh/janeyx99/290/base -> origin/gh/janeyx99/290/base 2025-08-14T21:22:30.2708413Z * [new branch] gh/janeyx99/290/head -> origin/gh/janeyx99/290/head 2025-08-14T21:22:30.2709233Z * [new branch] gh/janeyx99/290/orig -> origin/gh/janeyx99/290/orig 2025-08-14T21:22:30.2710576Z * [new branch] gh/janeyx99/291/base -> origin/gh/janeyx99/291/base 2025-08-14T21:22:30.2711510Z * [new branch] gh/janeyx99/291/head -> origin/gh/janeyx99/291/head 2025-08-14T21:22:30.2720959Z * [new branch] gh/janeyx99/291/orig -> origin/gh/janeyx99/291/orig 
2025-08-14T21:22:30.2722462Z * [new branch] gh/janeyx99/292/base -> origin/gh/janeyx99/292/base 2025-08-14T21:22:30.2723908Z * [new branch] gh/janeyx99/292/head -> origin/gh/janeyx99/292/head 2025-08-14T21:22:30.2724929Z * [new branch] gh/janeyx99/292/orig -> origin/gh/janeyx99/292/orig 2025-08-14T21:22:30.2726267Z * [new branch] gh/janeyx99/293/base -> origin/gh/janeyx99/293/base 2025-08-14T21:22:30.2734888Z * [new branch] gh/janeyx99/293/head -> origin/gh/janeyx99/293/head 2025-08-14T21:22:30.2735075Z * [new branch] gh/janeyx99/293/orig -> origin/gh/janeyx99/293/orig 2025-08-14T21:22:30.2735248Z * [new branch] gh/janeyx99/294/base -> origin/gh/janeyx99/294/base 2025-08-14T21:22:30.2735412Z * [new branch] gh/janeyx99/294/head -> origin/gh/janeyx99/294/head 2025-08-14T21:22:30.2735602Z * [new branch] gh/janeyx99/294/orig -> origin/gh/janeyx99/294/orig 2025-08-14T21:22:30.2735851Z * [new branch] gh/janeyx99/295/base -> origin/gh/janeyx99/295/base 2025-08-14T21:22:30.2736027Z * [new branch] gh/janeyx99/295/head -> origin/gh/janeyx99/295/head 2025-08-14T21:22:30.2736861Z * [new branch] gh/janeyx99/295/orig -> origin/gh/janeyx99/295/orig 2025-08-14T21:22:30.2738139Z * [new branch] gh/janeyx99/296/base -> origin/gh/janeyx99/296/base 2025-08-14T21:22:30.2739079Z * [new branch] gh/janeyx99/296/head -> origin/gh/janeyx99/296/head 2025-08-14T21:22:30.2740018Z * [new branch] gh/janeyx99/296/orig -> origin/gh/janeyx99/296/orig 2025-08-14T21:22:30.2741357Z * [new branch] gh/janeyx99/297/base -> origin/gh/janeyx99/297/base 2025-08-14T21:22:30.2749352Z * [new branch] gh/janeyx99/297/head -> origin/gh/janeyx99/297/head 2025-08-14T21:22:30.2749577Z * [new branch] gh/janeyx99/297/orig -> origin/gh/janeyx99/297/orig 2025-08-14T21:22:30.2749781Z * [new branch] gh/janeyx99/298/base -> origin/gh/janeyx99/298/base 2025-08-14T21:22:30.2749988Z * [new branch] gh/janeyx99/298/head -> origin/gh/janeyx99/298/head 2025-08-14T21:22:30.2750149Z * [new branch] gh/janeyx99/298/orig -> origin/gh/janeyx99/298/orig 2025-08-14T21:22:30.2750310Z * [new branch] gh/janeyx99/299/base -> origin/gh/janeyx99/299/base 2025-08-14T21:22:30.2750483Z * [new branch] gh/janeyx99/299/head -> origin/gh/janeyx99/299/head 2025-08-14T21:22:30.2750652Z * [new branch] gh/janeyx99/299/orig -> origin/gh/janeyx99/299/orig 2025-08-14T21:22:30.2751931Z * [new branch] gh/janeyx99/300/base -> origin/gh/janeyx99/300/base 2025-08-14T21:22:30.2752857Z * [new branch] gh/janeyx99/300/head -> origin/gh/janeyx99/300/head 2025-08-14T21:22:30.2754229Z * [new branch] gh/janeyx99/300/orig -> origin/gh/janeyx99/300/orig 2025-08-14T21:22:30.2759702Z * [new branch] gh/janeyx99/88/base -> origin/gh/janeyx99/88/base 2025-08-14T21:22:30.2760863Z * [new branch] gh/janeyx99/88/head -> origin/gh/janeyx99/88/head 2025-08-14T21:22:30.2761920Z * [new branch] gh/janeyx99/88/orig -> origin/gh/janeyx99/88/orig 2025-08-14T21:22:30.2763533Z * [new branch] gh/jansel/360/base -> origin/gh/jansel/360/base 2025-08-14T21:22:30.2764453Z * [new branch] gh/jansel/360/head -> origin/gh/jansel/360/head 2025-08-14T21:22:30.2765793Z * [new branch] gh/jansel/451/base -> origin/gh/jansel/451/base 2025-08-14T21:22:30.2766995Z * [new branch] gh/jansel/451/head -> origin/gh/jansel/451/head 2025-08-14T21:22:30.2767904Z * [new branch] gh/jansel/451/orig -> origin/gh/jansel/451/orig 2025-08-14T21:22:30.2769155Z * [new branch] gh/jansel/462/base -> origin/gh/jansel/462/base 2025-08-14T21:22:30.2776165Z * [new branch] gh/jansel/462/head -> origin/gh/jansel/462/head 2025-08-14T21:22:30.2776360Z * [new 
branch] gh/jansel/462/orig -> origin/gh/jansel/462/orig 2025-08-14T21:22:30.2776558Z * [new branch] gh/jansel/531/base -> origin/gh/jansel/531/base 2025-08-14T21:22:30.2776753Z * [new branch] gh/jansel/531/head -> origin/gh/jansel/531/head 2025-08-14T21:22:30.2776945Z * [new branch] gh/jansel/531/orig -> origin/gh/jansel/531/orig 2025-08-14T21:22:30.2777169Z * [new branch] gh/jansel/534/base -> origin/gh/jansel/534/base 2025-08-14T21:22:30.2777365Z * [new branch] gh/jansel/534/head -> origin/gh/jansel/534/head 2025-08-14T21:22:30.2777779Z * [new branch] gh/jansel/534/orig -> origin/gh/jansel/534/orig 2025-08-14T21:22:30.2779379Z * [new branch] gh/jbschlosser/226/base -> origin/gh/jbschlosser/226/base 2025-08-14T21:22:30.2780322Z * [new branch] gh/jbschlosser/226/head -> origin/gh/jbschlosser/226/head 2025-08-14T21:22:30.2781222Z * [new branch] gh/jbschlosser/226/orig -> origin/gh/jbschlosser/226/orig 2025-08-14T21:22:30.2782460Z * [new branch] gh/jbschlosser/239/base -> origin/gh/jbschlosser/239/base 2025-08-14T21:22:30.2783432Z * [new branch] gh/jbschlosser/239/head -> origin/gh/jbschlosser/239/head 2025-08-14T21:22:30.2790666Z * [new branch] gh/jbschlosser/239/orig -> origin/gh/jbschlosser/239/orig 2025-08-14T21:22:30.2790943Z * [new branch] gh/jbschlosser/247/base -> origin/gh/jbschlosser/247/base 2025-08-14T21:22:30.2791191Z * [new branch] gh/jbschlosser/247/head -> origin/gh/jbschlosser/247/head 2025-08-14T21:22:30.2791901Z * [new branch] gh/jbschlosser/247/orig -> origin/gh/jbschlosser/247/orig 2025-08-14T21:22:30.2793554Z * [new branch] gh/jbschlosser/248/base -> origin/gh/jbschlosser/248/base 2025-08-14T21:22:30.2794381Z * [new branch] gh/jbschlosser/248/head -> origin/gh/jbschlosser/248/head 2025-08-14T21:22:30.2795337Z * [new branch] gh/jbschlosser/248/orig -> origin/gh/jbschlosser/248/orig 2025-08-14T21:22:30.2796525Z * [new branch] gh/jbschlosser/249/base -> origin/gh/jbschlosser/249/base 2025-08-14T21:22:30.2797606Z * [new branch] gh/jbschlosser/249/head -> origin/gh/jbschlosser/249/head 2025-08-14T21:22:30.2798597Z * [new branch] gh/jbschlosser/249/orig -> origin/gh/jbschlosser/249/orig 2025-08-14T21:22:30.2803142Z * [new branch] gh/jbschlosser/250/base -> origin/gh/jbschlosser/250/base 2025-08-14T21:22:30.2803407Z * [new branch] gh/jbschlosser/250/head -> origin/gh/jbschlosser/250/head 2025-08-14T21:22:30.2803623Z * [new branch] gh/jbschlosser/250/orig -> origin/gh/jbschlosser/250/orig 2025-08-14T21:22:30.2803804Z * [new branch] gh/jiayisunx/57/base -> origin/gh/jiayisunx/57/base 2025-08-14T21:22:30.2804626Z * [new branch] gh/jiayisunx/57/head -> origin/gh/jiayisunx/57/head 2025-08-14T21:22:30.2805590Z * [new branch] gh/jiayisunx/57/orig -> origin/gh/jiayisunx/57/orig 2025-08-14T21:22:30.2807097Z * [new branch] gh/jiayisunx/59/base -> origin/gh/jiayisunx/59/base 2025-08-14T21:22:30.2807839Z * [new branch] gh/jiayisunx/59/head -> origin/gh/jiayisunx/59/head 2025-08-14T21:22:30.2808748Z * [new branch] gh/jiayisunx/59/orig -> origin/gh/jiayisunx/59/orig 2025-08-14T21:22:30.2809923Z * [new branch] gh/jiayisunx/61/base -> origin/gh/jiayisunx/61/base 2025-08-14T21:22:30.2810890Z * [new branch] gh/jiayisunx/61/head -> origin/gh/jiayisunx/61/head 2025-08-14T21:22:30.2811791Z * [new branch] gh/jiayisunx/61/orig -> origin/gh/jiayisunx/61/orig 2025-08-14T21:22:30.2822056Z * [new branch] gh/jiayisunx/63/base -> origin/gh/jiayisunx/63/base 2025-08-14T21:22:30.2822280Z * [new branch] gh/jiayisunx/63/head -> origin/gh/jiayisunx/63/head 2025-08-14T21:22:30.2822606Z * [new branch] 
gh/jiayisunx/63/orig -> origin/gh/jiayisunx/63/orig 2025-08-14T21:22:30.2822816Z * [new branch] gh/jiayisunx/64/base -> origin/gh/jiayisunx/64/base 2025-08-14T21:22:30.2823003Z * [new branch] gh/jiayisunx/64/head -> origin/gh/jiayisunx/64/head 2025-08-14T21:22:30.2823184Z * [new branch] gh/jiayisunx/64/orig -> origin/gh/jiayisunx/64/orig 2025-08-14T21:22:30.2823982Z * [new branch] gh/jiayisunx/65/base -> origin/gh/jiayisunx/65/base 2025-08-14T21:22:30.2824918Z * [new branch] gh/jiayisunx/65/head -> origin/gh/jiayisunx/65/head 2025-08-14T21:22:30.2825815Z * [new branch] gh/jiayisunx/65/orig -> origin/gh/jiayisunx/65/orig 2025-08-14T21:22:30.2827071Z * [new branch] gh/jiayisunx/66/base -> origin/gh/jiayisunx/66/base 2025-08-14T21:22:30.2832477Z * [new branch] gh/jiayisunx/66/head -> origin/gh/jiayisunx/66/head 2025-08-14T21:22:30.2832647Z * [new branch] gh/jiayisunx/66/orig -> origin/gh/jiayisunx/66/orig 2025-08-14T21:22:30.2832819Z * [new branch] gh/jiayisunx/67/base -> origin/gh/jiayisunx/67/base 2025-08-14T21:22:30.2833371Z * [new branch] gh/jiayisunx/67/head -> origin/gh/jiayisunx/67/head 2025-08-14T21:22:30.2833613Z * [new branch] gh/jiayisunx/67/orig -> origin/gh/jiayisunx/67/orig 2025-08-14T21:22:30.2833786Z * [new branch] gh/jiayisunx/68/base -> origin/gh/jiayisunx/68/base 2025-08-14T21:22:30.2834485Z * [new branch] gh/jiayisunx/68/head -> origin/gh/jiayisunx/68/head 2025-08-14T21:22:30.2835431Z * [new branch] gh/jiayisunx/68/orig -> origin/gh/jiayisunx/68/orig 2025-08-14T21:22:30.2836866Z * [new branch] gh/jjwu@meta.com/1/base -> origin/gh/jjwu@meta.com/1/base 2025-08-14T21:22:30.2837815Z * [new branch] gh/jjwu@meta.com/1/head -> origin/gh/jjwu@meta.com/1/head 2025-08-14T21:22:30.2839234Z * [new branch] gh/justinchuby/111/base -> origin/gh/justinchuby/111/base 2025-08-14T21:22:30.2840195Z * [new branch] gh/justinchuby/111/head -> origin/gh/justinchuby/111/head 2025-08-14T21:22:30.2841259Z * [new branch] gh/justinchuby/111/orig -> origin/gh/justinchuby/111/orig 2025-08-14T21:22:30.2842828Z * [new branch] gh/kurtamohler/32/base -> origin/gh/kurtamohler/32/base 2025-08-14T21:22:30.2848000Z * [new branch] gh/kurtamohler/32/head -> origin/gh/kurtamohler/32/head 2025-08-14T21:22:30.2849298Z * [new branch] gh/kurtamohler/32/orig -> origin/gh/kurtamohler/32/orig 2025-08-14T21:22:30.2850581Z * [new branch] gh/kurtamohler/33/base -> origin/gh/kurtamohler/33/base 2025-08-14T21:22:30.2851581Z * [new branch] gh/kurtamohler/33/head -> origin/gh/kurtamohler/33/head 2025-08-14T21:22:30.2852475Z * [new branch] gh/kurtamohler/33/orig -> origin/gh/kurtamohler/33/orig 2025-08-14T21:22:30.2853810Z * [new branch] gh/kurtamohler/34/base -> origin/gh/kurtamohler/34/base 2025-08-14T21:22:30.2854742Z * [new branch] gh/kurtamohler/34/head -> origin/gh/kurtamohler/34/head 2025-08-14T21:22:30.2855618Z * [new branch] gh/kurtamohler/34/orig -> origin/gh/kurtamohler/34/orig 2025-08-14T21:22:30.2856850Z * [new branch] gh/kurtamohler/40/base -> origin/gh/kurtamohler/40/base 2025-08-14T21:22:30.2861359Z * [new branch] gh/kurtamohler/40/head -> origin/gh/kurtamohler/40/head 2025-08-14T21:22:30.2861599Z * [new branch] gh/kurtamohler/40/orig -> origin/gh/kurtamohler/40/orig 2025-08-14T21:22:30.2861825Z * [new branch] gh/kurtamohler/41/base -> origin/gh/kurtamohler/41/base 2025-08-14T21:22:30.2862047Z * [new branch] gh/kurtamohler/41/head -> origin/gh/kurtamohler/41/head 2025-08-14T21:22:30.2862278Z * [new branch] gh/kurtamohler/41/orig -> origin/gh/kurtamohler/41/orig 2025-08-14T21:22:30.2863187Z * [new branch] 
gh/kurtamohler/42/base -> origin/gh/kurtamohler/42/base 2025-08-14T21:22:30.2864148Z * [new branch] gh/kurtamohler/42/head -> origin/gh/kurtamohler/42/head 2025-08-14T21:22:30.2865080Z * [new branch] gh/kurtamohler/42/orig -> origin/gh/kurtamohler/42/orig 2025-08-14T21:22:30.2866407Z * [new branch] gh/kurtamohler/43/base -> origin/gh/kurtamohler/43/base 2025-08-14T21:22:30.2867321Z * [new branch] gh/kurtamohler/43/head -> origin/gh/kurtamohler/43/head 2025-08-14T21:22:30.2868229Z * [new branch] gh/kurtamohler/43/orig -> origin/gh/kurtamohler/43/orig 2025-08-14T21:22:30.2869696Z * [new branch] gh/kurtamohler/44/base -> origin/gh/kurtamohler/44/base 2025-08-14T21:22:30.2870574Z * [new branch] gh/kurtamohler/44/head -> origin/gh/kurtamohler/44/head 2025-08-14T21:22:30.2871651Z * [new branch] gh/kurtamohler/44/orig -> origin/gh/kurtamohler/44/orig 2025-08-14T21:22:30.2881363Z * [new branch] gh/kurtamohler/45/base -> origin/gh/kurtamohler/45/base 2025-08-14T21:22:30.2882256Z * [new branch] gh/kurtamohler/45/head -> origin/gh/kurtamohler/45/head 2025-08-14T21:22:30.2883138Z * [new branch] gh/kurtamohler/45/orig -> origin/gh/kurtamohler/45/orig 2025-08-14T21:22:30.2884353Z * [new branch] gh/kurtamohler/46/base -> origin/gh/kurtamohler/46/base 2025-08-14T21:22:30.2885325Z * [new branch] gh/kurtamohler/46/head -> origin/gh/kurtamohler/46/head 2025-08-14T21:22:30.2894291Z * [new branch] gh/kurtamohler/46/orig -> origin/gh/kurtamohler/46/orig 2025-08-14T21:22:30.2894511Z * [new branch] gh/kwen2501/130/base -> origin/gh/kwen2501/130/base 2025-08-14T21:22:30.2894717Z * [new branch] gh/kwen2501/130/head -> origin/gh/kwen2501/130/head 2025-08-14T21:22:30.2894934Z * [new branch] gh/kwen2501/130/orig -> origin/gh/kwen2501/130/orig 2025-08-14T21:22:30.2895141Z * [new branch] gh/kwen2501/142/base -> origin/gh/kwen2501/142/base 2025-08-14T21:22:30.2895351Z * [new branch] gh/kwen2501/142/head -> origin/gh/kwen2501/142/head 2025-08-14T21:22:30.2895565Z * [new branch] gh/kwen2501/142/orig -> origin/gh/kwen2501/142/orig 2025-08-14T21:22:30.2896731Z * [new branch] gh/kwen2501/15/base -> origin/gh/kwen2501/15/base 2025-08-14T21:22:30.2897660Z * [new branch] gh/kwen2501/15/head -> origin/gh/kwen2501/15/head 2025-08-14T21:22:30.2898899Z * [new branch] gh/kwen2501/156/base -> origin/gh/kwen2501/156/base 2025-08-14T21:22:30.2899862Z * [new branch] gh/kwen2501/156/head -> origin/gh/kwen2501/156/head 2025-08-14T21:22:30.2902491Z * [new branch] gh/kwen2501/156/orig -> origin/gh/kwen2501/156/orig 2025-08-14T21:22:30.2902652Z * [new branch] gh/kwen2501/170/base -> origin/gh/kwen2501/170/base 2025-08-14T21:22:30.2903199Z * [new branch] gh/kwen2501/170/head -> origin/gh/kwen2501/170/head 2025-08-14T21:22:30.2908752Z * [new branch] gh/kwen2501/179/base -> origin/gh/kwen2501/179/base 2025-08-14T21:22:30.2908966Z * [new branch] gh/kwen2501/179/head -> origin/gh/kwen2501/179/head 2025-08-14T21:22:30.2909187Z * [new branch] gh/kwen2501/179/orig -> origin/gh/kwen2501/179/orig 2025-08-14T21:22:30.2909387Z * [new branch] gh/kwen2501/181/base -> origin/gh/kwen2501/181/base 2025-08-14T21:22:30.2909557Z * [new branch] gh/kwen2501/181/head -> origin/gh/kwen2501/181/head 2025-08-14T21:22:30.2910025Z * [new branch] gh/kwen2501/181/orig -> origin/gh/kwen2501/181/orig 2025-08-14T21:22:30.2911253Z * [new branch] gh/kwen2501/183/base -> origin/gh/kwen2501/183/base 2025-08-14T21:22:30.2912169Z * [new branch] gh/kwen2501/183/head -> origin/gh/kwen2501/183/head 2025-08-14T21:22:30.2913128Z * [new branch] gh/kwen2501/183/orig -> 
origin/gh/kwen2501/183/orig 2025-08-14T21:22:30.2916850Z * [new branch] gh/kwen2501/184/base -> origin/gh/kwen2501/184/base 2025-08-14T21:22:30.2920277Z * [new branch] gh/kwen2501/184/head -> origin/gh/kwen2501/184/head 2025-08-14T21:22:30.2921100Z * [new branch] gh/kwen2501/184/orig -> origin/gh/kwen2501/184/orig 2025-08-14T21:22:30.2922467Z * [new branch] gh/kwen2501/186/base -> origin/gh/kwen2501/186/base 2025-08-14T21:22:30.2923433Z * [new branch] gh/kwen2501/186/head -> origin/gh/kwen2501/186/head 2025-08-14T21:22:30.2924463Z * [new branch] gh/kwen2501/186/orig -> origin/gh/kwen2501/186/orig 2025-08-14T21:22:30.2925458Z * [new branch] gh/kwen2501/187/base -> origin/gh/kwen2501/187/base 2025-08-14T21:22:30.2926524Z * [new branch] gh/kwen2501/187/head -> origin/gh/kwen2501/187/head 2025-08-14T21:22:30.2927459Z * [new branch] gh/kwen2501/187/orig -> origin/gh/kwen2501/187/orig 2025-08-14T21:22:30.2928632Z * [new branch] gh/kwen2501/188/base -> origin/gh/kwen2501/188/base 2025-08-14T21:22:30.2933665Z * [new branch] gh/kwen2501/188/head -> origin/gh/kwen2501/188/head 2025-08-14T21:22:30.2933877Z * [new branch] gh/kwen2501/188/orig -> origin/gh/kwen2501/188/orig 2025-08-14T21:22:30.2934041Z * [new branch] gh/kwen2501/194/base -> origin/gh/kwen2501/194/base 2025-08-14T21:22:30.2934206Z * [new branch] gh/kwen2501/194/head -> origin/gh/kwen2501/194/head 2025-08-14T21:22:30.2934362Z * [new branch] gh/kwen2501/194/orig -> origin/gh/kwen2501/194/orig 2025-08-14T21:22:30.2935337Z * [new branch] gh/kwen2501/195/base -> origin/gh/kwen2501/195/base 2025-08-14T21:22:30.2936330Z * [new branch] gh/kwen2501/195/head -> origin/gh/kwen2501/195/head 2025-08-14T21:22:30.2937182Z * [new branch] gh/kwen2501/195/orig -> origin/gh/kwen2501/195/orig 2025-08-14T21:22:30.2938398Z * [new branch] gh/kwen2501/196/base -> origin/gh/kwen2501/196/base 2025-08-14T21:22:30.2939310Z * [new branch] gh/kwen2501/196/head -> origin/gh/kwen2501/196/head 2025-08-14T21:22:30.2940301Z * [new branch] gh/kwen2501/196/orig -> origin/gh/kwen2501/196/orig 2025-08-14T21:22:30.2941492Z * [new branch] gh/kwen2501/197/base -> origin/gh/kwen2501/197/base 2025-08-14T21:22:30.2942447Z * [new branch] gh/kwen2501/197/head -> origin/gh/kwen2501/197/head 2025-08-14T21:22:30.2943435Z * [new branch] gh/kwen2501/197/orig -> origin/gh/kwen2501/197/orig 2025-08-14T21:22:30.2952942Z * [new branch] gh/kwen2501/198/base -> origin/gh/kwen2501/198/base 2025-08-14T21:22:30.2953129Z * [new branch] gh/kwen2501/198/head -> origin/gh/kwen2501/198/head 2025-08-14T21:22:30.2953416Z * [new branch] gh/kwen2501/198/orig -> origin/gh/kwen2501/198/orig 2025-08-14T21:22:30.2953573Z * [new branch] gh/kwen2501/199/base -> origin/gh/kwen2501/199/base 2025-08-14T21:22:30.2954059Z * [new branch] gh/kwen2501/199/head -> origin/gh/kwen2501/199/head 2025-08-14T21:22:30.2954977Z * [new branch] gh/kwen2501/199/orig -> origin/gh/kwen2501/199/orig 2025-08-14T21:22:30.2956117Z * [new branch] gh/kwen2501/200/base -> origin/gh/kwen2501/200/base 2025-08-14T21:22:30.2957060Z * [new branch] gh/kwen2501/200/head -> origin/gh/kwen2501/200/head 2025-08-14T21:22:30.2957994Z * [new branch] gh/kwen2501/200/orig -> origin/gh/kwen2501/200/orig 2025-08-14T21:22:30.2962715Z * [new branch] gh/kwen2501/201/base -> origin/gh/kwen2501/201/base 2025-08-14T21:22:30.2962896Z * [new branch] gh/kwen2501/201/head -> origin/gh/kwen2501/201/head 2025-08-14T21:22:30.2963063Z * [new branch] gh/kwen2501/201/orig -> origin/gh/kwen2501/201/orig 2025-08-14T21:22:30.2963220Z * [new branch] gh/kwen2501/202/base -> 
origin/gh/kwen2501/202/base 2025-08-14T21:22:30.2963801Z * [new branch] gh/kwen2501/202/head -> origin/gh/kwen2501/202/head 2025-08-14T21:22:30.2964747Z * [new branch] gh/kwen2501/202/orig -> origin/gh/kwen2501/202/orig 2025-08-14T21:22:30.2966028Z * [new branch] gh/kwen2501/203/base -> origin/gh/kwen2501/203/base 2025-08-14T21:22:30.2966979Z * [new branch] gh/kwen2501/203/head -> origin/gh/kwen2501/203/head 2025-08-14T21:22:30.2968286Z * [new branch] gh/kwen2501/203/orig -> origin/gh/kwen2501/203/orig 2025-08-14T21:22:30.2970176Z * [new branch] gh/laithsakka/152/base -> origin/gh/laithsakka/152/base 2025-08-14T21:22:30.2971014Z * [new branch] gh/laithsakka/152/head -> origin/gh/laithsakka/152/head 2025-08-14T21:22:30.2971946Z * [new branch] gh/laithsakka/152/orig -> origin/gh/laithsakka/152/orig 2025-08-14T21:22:30.2981347Z * [new branch] gh/laithsakka/156/base -> origin/gh/laithsakka/156/base 2025-08-14T21:22:30.2981636Z * [new branch] gh/laithsakka/156/head -> origin/gh/laithsakka/156/head 2025-08-14T21:22:30.2981869Z * [new branch] gh/laithsakka/156/orig -> origin/gh/laithsakka/156/orig 2025-08-14T21:22:30.2982086Z * [new branch] gh/laithsakka/159/base -> origin/gh/laithsakka/159/base 2025-08-14T21:22:30.2982335Z * [new branch] gh/laithsakka/159/head -> origin/gh/laithsakka/159/head 2025-08-14T21:22:30.2982736Z * [new branch] gh/laithsakka/159/orig -> origin/gh/laithsakka/159/orig 2025-08-14T21:22:30.2984057Z * [new branch] gh/laithsakka/160/base -> origin/gh/laithsakka/160/base 2025-08-14T21:22:30.2984962Z * [new branch] gh/laithsakka/160/head -> origin/gh/laithsakka/160/head 2025-08-14T21:22:30.2985817Z * [new branch] gh/laithsakka/160/orig -> origin/gh/laithsakka/160/orig 2025-08-14T21:22:30.2987115Z * [new branch] gh/laithsakka/178/base -> origin/gh/laithsakka/178/base 2025-08-14T21:22:30.2992007Z * [new branch] gh/laithsakka/178/head -> origin/gh/laithsakka/178/head 2025-08-14T21:22:30.2992196Z * [new branch] gh/laithsakka/178/orig -> origin/gh/laithsakka/178/orig 2025-08-14T21:22:30.2992371Z * [new branch] gh/laithsakka/191/base -> origin/gh/laithsakka/191/base 2025-08-14T21:22:30.2992547Z * [new branch] gh/laithsakka/191/head -> origin/gh/laithsakka/191/head 2025-08-14T21:22:30.2992980Z * [new branch] gh/laithsakka/191/orig -> origin/gh/laithsakka/191/orig 2025-08-14T21:22:30.2994225Z * [new branch] gh/laithsakka/234/base -> origin/gh/laithsakka/234/base 2025-08-14T21:22:30.2995148Z * [new branch] gh/laithsakka/234/head -> origin/gh/laithsakka/234/head 2025-08-14T21:22:30.2996089Z * [new branch] gh/laithsakka/234/orig -> origin/gh/laithsakka/234/orig 2025-08-14T21:22:30.2997267Z * [new branch] gh/laithsakka/237/base -> origin/gh/laithsakka/237/base 2025-08-14T21:22:30.2998227Z * [new branch] gh/laithsakka/237/head -> origin/gh/laithsakka/237/head 2025-08-14T21:22:30.2999160Z * [new branch] gh/laithsakka/237/orig -> origin/gh/laithsakka/237/orig 2025-08-14T21:22:30.3000392Z * [new branch] gh/laithsakka/238/base -> origin/gh/laithsakka/238/base 2025-08-14T21:22:30.3001384Z * [new branch] gh/laithsakka/238/head -> origin/gh/laithsakka/238/head 2025-08-14T21:22:30.3010825Z * [new branch] gh/laithsakka/238/orig -> origin/gh/laithsakka/238/orig 2025-08-14T21:22:30.3012138Z * [new branch] gh/laithsakka/239/base -> origin/gh/laithsakka/239/base 2025-08-14T21:22:30.3013125Z * [new branch] gh/laithsakka/239/head -> origin/gh/laithsakka/239/head 2025-08-14T21:22:30.3014020Z * [new branch] gh/laithsakka/239/orig -> origin/gh/laithsakka/239/orig 2025-08-14T21:22:30.3015179Z * [new branch] 
gh/laithsakka/240/base -> origin/gh/laithsakka/240/base 2025-08-14T21:22:30.3016180Z * [new branch] gh/laithsakka/240/head -> origin/gh/laithsakka/240/head 2025-08-14T21:22:30.3020882Z * [new branch] gh/laithsakka/240/orig -> origin/gh/laithsakka/240/orig 2025-08-14T21:22:30.3021119Z * [new branch] gh/laithsakka/242/base -> origin/gh/laithsakka/242/base 2025-08-14T21:22:30.3021339Z * [new branch] gh/laithsakka/242/head -> origin/gh/laithsakka/242/head 2025-08-14T21:22:30.3021591Z * [new branch] gh/laithsakka/242/orig -> origin/gh/laithsakka/242/orig 2025-08-14T21:22:30.3021990Z * [new branch] gh/laithsakka/243/base -> origin/gh/laithsakka/243/base 2025-08-14T21:22:30.3022921Z * [new branch] gh/laithsakka/243/head -> origin/gh/laithsakka/243/head 2025-08-14T21:22:30.3023808Z * [new branch] gh/laithsakka/243/orig -> origin/gh/laithsakka/243/orig 2025-08-14T21:22:30.3025144Z * [new branch] gh/laithsakka/244/base -> origin/gh/laithsakka/244/base 2025-08-14T21:22:30.3026108Z * [new branch] gh/laithsakka/244/head -> origin/gh/laithsakka/244/head 2025-08-14T21:22:30.3027045Z * [new branch] gh/laithsakka/244/orig -> origin/gh/laithsakka/244/orig 2025-08-14T21:22:30.3028379Z * [new branch] gh/laithsakka/245/base -> origin/gh/laithsakka/245/base 2025-08-14T21:22:30.3029344Z * [new branch] gh/laithsakka/245/head -> origin/gh/laithsakka/245/head 2025-08-14T21:22:30.3030196Z * [new branch] gh/laithsakka/245/orig -> origin/gh/laithsakka/245/orig 2025-08-14T21:22:30.3031675Z * [new branch] gh/laithsakka/246/base -> origin/gh/laithsakka/246/base 2025-08-14T21:22:30.3032988Z * [new branch] gh/laithsakka/246/head -> origin/gh/laithsakka/246/head 2025-08-14T21:22:30.3034023Z * [new branch] gh/laithsakka/246/orig -> origin/gh/laithsakka/246/orig 2025-08-14T21:22:30.3035657Z * [new branch] gh/laithsakka/247/base -> origin/gh/laithsakka/247/base 2025-08-14T21:22:30.3036563Z * [new branch] gh/laithsakka/247/head -> origin/gh/laithsakka/247/head 2025-08-14T21:22:30.3037854Z * [new branch] gh/laithsakka/247/orig -> origin/gh/laithsakka/247/orig 2025-08-14T21:22:30.3039151Z * [new branch] gh/laithsakka/248/base -> origin/gh/laithsakka/248/base 2025-08-14T21:22:30.3040124Z * [new branch] gh/laithsakka/248/head -> origin/gh/laithsakka/248/head 2025-08-14T21:22:30.3041091Z * [new branch] gh/laithsakka/248/orig -> origin/gh/laithsakka/248/orig 2025-08-14T21:22:30.3042550Z * [new branch] gh/laithsakka/249/base -> origin/gh/laithsakka/249/base 2025-08-14T21:22:30.3043473Z * [new branch] gh/laithsakka/249/head -> origin/gh/laithsakka/249/head 2025-08-14T21:22:30.3044413Z * [new branch] gh/laithsakka/249/orig -> origin/gh/laithsakka/249/orig 2025-08-14T21:22:30.3054256Z * [new branch] gh/laithsakka/250/base -> origin/gh/laithsakka/250/base 2025-08-14T21:22:30.3054482Z * [new branch] gh/laithsakka/250/head -> origin/gh/laithsakka/250/head 2025-08-14T21:22:30.3054730Z * [new branch] gh/laithsakka/250/orig -> origin/gh/laithsakka/250/orig 2025-08-14T21:22:30.3056019Z * [new branch] gh/laithsakka/251/base -> origin/gh/laithsakka/251/base 2025-08-14T21:22:30.3056965Z * [new branch] gh/laithsakka/251/head -> origin/gh/laithsakka/251/head 2025-08-14T21:22:30.3057902Z * [new branch] gh/laithsakka/251/orig -> origin/gh/laithsakka/251/orig 2025-08-14T21:22:30.3059179Z * [new branch] gh/laithsakka/252/base -> origin/gh/laithsakka/252/base 2025-08-14T21:22:30.3061980Z * [new branch] gh/laithsakka/252/head -> origin/gh/laithsakka/252/head 2025-08-14T21:22:30.3062157Z * [new branch] gh/laithsakka/252/orig -> 
origin/gh/laithsakka/252/orig 2025-08-14T21:22:30.3063380Z * [new branch] gh/laithsakka/253/base -> origin/gh/laithsakka/253/base 2025-08-14T21:22:30.3064377Z * [new branch] gh/laithsakka/253/head -> origin/gh/laithsakka/253/head 2025-08-14T21:22:30.3065306Z * [new branch] gh/laithsakka/253/orig -> origin/gh/laithsakka/253/orig 2025-08-14T21:22:30.3067055Z * [new branch] gh/laithsakka/254/base -> origin/gh/laithsakka/254/base 2025-08-14T21:22:30.3067894Z * [new branch] gh/laithsakka/254/head -> origin/gh/laithsakka/254/head 2025-08-14T21:22:30.3068794Z * [new branch] gh/laithsakka/254/orig -> origin/gh/laithsakka/254/orig 2025-08-14T21:22:30.3070032Z * [new branch] gh/laithsakka/255/base -> origin/gh/laithsakka/255/base 2025-08-14T21:22:30.3070923Z * [new branch] gh/laithsakka/255/head -> origin/gh/laithsakka/255/head 2025-08-14T21:22:30.3072106Z * [new branch] gh/laithsakka/255/orig -> origin/gh/laithsakka/255/orig 2025-08-14T21:22:30.3073404Z * [new branch] gh/laithsakka/256/base -> origin/gh/laithsakka/256/base 2025-08-14T21:22:30.3078700Z * [new branch] gh/laithsakka/256/head -> origin/gh/laithsakka/256/head 2025-08-14T21:22:30.3083116Z * [new branch] gh/laithsakka/256/orig -> origin/gh/laithsakka/256/orig 2025-08-14T21:22:30.3083315Z * [new branch] gh/laithsakka/257/base -> origin/gh/laithsakka/257/base 2025-08-14T21:22:30.3083493Z * [new branch] gh/laithsakka/257/head -> origin/gh/laithsakka/257/head 2025-08-14T21:22:30.3083665Z * [new branch] gh/laithsakka/257/orig -> origin/gh/laithsakka/257/orig 2025-08-14T21:22:30.3084242Z * [new branch] gh/laithsakka/258/base -> origin/gh/laithsakka/258/base 2025-08-14T21:22:30.3085194Z * [new branch] gh/laithsakka/258/head -> origin/gh/laithsakka/258/head 2025-08-14T21:22:30.3086111Z * [new branch] gh/laithsakka/258/orig -> origin/gh/laithsakka/258/orig 2025-08-14T21:22:30.3087588Z * [new branch] gh/laithsakka/259/base -> origin/gh/laithsakka/259/base 2025-08-14T21:22:30.3088323Z * [new branch] gh/laithsakka/259/head -> origin/gh/laithsakka/259/head 2025-08-14T21:22:30.3093212Z * [new branch] gh/laithsakka/259/orig -> origin/gh/laithsakka/259/orig 2025-08-14T21:22:30.3093599Z * [new branch] gh/laithsakka/260/base -> origin/gh/laithsakka/260/base 2025-08-14T21:22:30.3093783Z * [new branch] gh/laithsakka/260/head -> origin/gh/laithsakka/260/head 2025-08-14T21:22:30.3094137Z * [new branch] gh/laithsakka/260/orig -> origin/gh/laithsakka/260/orig 2025-08-14T21:22:30.3094442Z * [new branch] gh/laithsakka/261/base -> origin/gh/laithsakka/261/base 2025-08-14T21:22:30.3095530Z * [new branch] gh/laithsakka/261/head -> origin/gh/laithsakka/261/head 2025-08-14T21:22:30.3096180Z * [new branch] gh/laithsakka/261/orig -> origin/gh/laithsakka/261/orig 2025-08-14T21:22:30.3097394Z * [new branch] gh/laithsakka/262/base -> origin/gh/laithsakka/262/base 2025-08-14T21:22:30.3098360Z * [new branch] gh/laithsakka/262/head -> origin/gh/laithsakka/262/head 2025-08-14T21:22:30.3099137Z * [new branch] gh/laithsakka/262/orig -> origin/gh/laithsakka/262/orig 2025-08-14T21:22:30.3100626Z * [new branch] gh/laithsakka/28/base -> origin/gh/laithsakka/28/base 2025-08-14T21:22:30.3101797Z * [new branch] gh/laithsakka/29/base -> origin/gh/laithsakka/29/base 2025-08-14T21:22:30.3102923Z * [new branch] gh/laithsakka/30/base -> origin/gh/laithsakka/30/base 2025-08-14T21:22:30.3112090Z * [new branch] gh/laithsakka/30/head -> origin/gh/laithsakka/30/head 2025-08-14T21:22:30.3112392Z * [new branch] gh/laithsakka/31/base -> origin/gh/laithsakka/31/base 2025-08-14T21:22:30.3112680Z * 
[new branch] gh/laithsakka/31/head -> origin/gh/laithsakka/31/head 2025-08-14T21:22:30.3112926Z * [new branch] gh/laithsakka/32/base -> origin/gh/laithsakka/32/base 2025-08-14T21:22:30.3113175Z * [new branch] gh/laithsakka/32/head -> origin/gh/laithsakka/32/head 2025-08-14T21:22:30.3115568Z * [new branch] gh/lucaskabela/1/base -> origin/gh/lucaskabela/1/base 2025-08-14T21:22:30.3116302Z * [new branch] gh/lucaskabela/1/head -> origin/gh/lucaskabela/1/head 2025-08-14T21:22:30.3117845Z * [new branch] gh/lucaskabela/10/base -> origin/gh/lucaskabela/10/base 2025-08-14T21:22:30.3122342Z * [new branch] gh/lucaskabela/10/head -> origin/gh/lucaskabela/10/head 2025-08-14T21:22:30.3122582Z * [new branch] gh/lucaskabela/10/orig -> origin/gh/lucaskabela/10/orig 2025-08-14T21:22:30.3122863Z * [new branch] gh/lucaskabela/11/base -> origin/gh/lucaskabela/11/base 2025-08-14T21:22:30.3123094Z * [new branch] gh/lucaskabela/11/head -> origin/gh/lucaskabela/11/head 2025-08-14T21:22:30.3123389Z * [new branch] gh/lucaskabela/11/orig -> origin/gh/lucaskabela/11/orig 2025-08-14T21:22:30.3124075Z * [new branch] gh/lucaskabela/12/base -> origin/gh/lucaskabela/12/base 2025-08-14T21:22:30.3125081Z * [new branch] gh/lucaskabela/12/head -> origin/gh/lucaskabela/12/head 2025-08-14T21:22:30.3125986Z * [new branch] gh/lucaskabela/12/orig -> origin/gh/lucaskabela/12/orig 2025-08-14T21:22:30.3127505Z * [new branch] gh/lucaskabela/13/base -> origin/gh/lucaskabela/13/base 2025-08-14T21:22:30.3128472Z * [new branch] gh/lucaskabela/13/head -> origin/gh/lucaskabela/13/head 2025-08-14T21:22:30.3129185Z * [new branch] gh/lucaskabela/13/orig -> origin/gh/lucaskabela/13/orig 2025-08-14T21:22:30.3130389Z * [new branch] gh/lucaskabela/14/base -> origin/gh/lucaskabela/14/base 2025-08-14T21:22:30.3131250Z * [new branch] gh/lucaskabela/14/head -> origin/gh/lucaskabela/14/head 2025-08-14T21:22:30.3132239Z * [new branch] gh/lucaskabela/14/orig -> origin/gh/lucaskabela/14/orig 2025-08-14T21:22:30.3138487Z * [new branch] gh/lucaskabela/15/base -> origin/gh/lucaskabela/15/base 2025-08-14T21:22:30.3139571Z * [new branch] gh/lucaskabela/15/head -> origin/gh/lucaskabela/15/head 2025-08-14T21:22:30.3140474Z * [new branch] gh/lucaskabela/15/orig -> origin/gh/lucaskabela/15/orig 2025-08-14T21:22:30.3141537Z * [new branch] gh/lucaskabela/16/base -> origin/gh/lucaskabela/16/base 2025-08-14T21:22:30.3142475Z * [new branch] gh/lucaskabela/16/head -> origin/gh/lucaskabela/16/head 2025-08-14T21:22:30.3143327Z * [new branch] gh/lucaskabela/16/orig -> origin/gh/lucaskabela/16/orig 2025-08-14T21:22:30.3144526Z * [new branch] gh/lucaskabela/17/base -> origin/gh/lucaskabela/17/base 2025-08-14T21:22:30.3145298Z * [new branch] gh/lucaskabela/17/head -> origin/gh/lucaskabela/17/head 2025-08-14T21:22:30.3146297Z * [new branch] gh/lucaskabela/17/orig -> origin/gh/lucaskabela/17/orig 2025-08-14T21:22:30.3153495Z * [new branch] gh/lucaskabela/2/base -> origin/gh/lucaskabela/2/base 2025-08-14T21:22:30.3153757Z * [new branch] gh/lucaskabela/2/head -> origin/gh/lucaskabela/2/head 2025-08-14T21:22:30.3154027Z * [new branch] gh/lucaskabela/2/orig -> origin/gh/lucaskabela/2/orig 2025-08-14T21:22:30.3154315Z * [new branch] gh/lucaskabela/3/base -> origin/gh/lucaskabela/3/base 2025-08-14T21:22:30.3154611Z * [new branch] gh/lucaskabela/3/head -> origin/gh/lucaskabela/3/head 2025-08-14T21:22:30.3154867Z * [new branch] gh/lucaskabela/3/orig -> origin/gh/lucaskabela/3/orig 2025-08-14T21:22:30.3155174Z * [new branch] gh/lucaskabela/4/base -> origin/gh/lucaskabela/4/base 
2025-08-14T21:22:30.3155497Z * [new branch] gh/lucaskabela/4/head -> origin/gh/lucaskabela/4/head 2025-08-14T21:22:30.3156516Z * [new branch] gh/lucaskabela/4/orig -> origin/gh/lucaskabela/4/orig 2025-08-14T21:22:30.3157770Z * [new branch] gh/lucaskabela/5/base -> origin/gh/lucaskabela/5/base 2025-08-14T21:22:30.3158475Z * [new branch] gh/lucaskabela/5/head -> origin/gh/lucaskabela/5/head 2025-08-14T21:22:30.3159447Z * [new branch] gh/lucaskabela/5/orig -> origin/gh/lucaskabela/5/orig 2025-08-14T21:22:30.3160581Z * [new branch] gh/lucaskabela/6/base -> origin/gh/lucaskabela/6/base 2025-08-14T21:22:30.3161731Z * [new branch] gh/lucaskabela/6/head -> origin/gh/lucaskabela/6/head 2025-08-14T21:22:30.3171008Z * [new branch] gh/lucaskabela/6/orig -> origin/gh/lucaskabela/6/orig 2025-08-14T21:22:30.3172486Z * [new branch] gh/lucaskabela/7/base -> origin/gh/lucaskabela/7/base 2025-08-14T21:22:30.3173322Z * [new branch] gh/lucaskabela/7/head -> origin/gh/lucaskabela/7/head 2025-08-14T21:22:30.3174497Z * [new branch] gh/lucaskabela/7/orig -> origin/gh/lucaskabela/7/orig 2025-08-14T21:22:30.3175763Z * [new branch] gh/lucaskabela/8/base -> origin/gh/lucaskabela/8/base 2025-08-14T21:22:30.3184303Z * [new branch] gh/lucaskabela/8/head -> origin/gh/lucaskabela/8/head 2025-08-14T21:22:30.3184616Z * [new branch] gh/lucaskabela/8/orig -> origin/gh/lucaskabela/8/orig 2025-08-14T21:22:30.3184863Z * [new branch] gh/lucaskabela/9/base -> origin/gh/lucaskabela/9/base 2025-08-14T21:22:30.3185111Z * [new branch] gh/lucaskabela/9/head -> origin/gh/lucaskabela/9/head 2025-08-14T21:22:30.3185390Z * [new branch] gh/lucaskabela/9/orig -> origin/gh/lucaskabela/9/orig 2025-08-14T21:22:30.3185595Z * [new branch] gh/lw/1/base -> origin/gh/lw/1/base 2025-08-14T21:22:30.3185893Z * [new branch] gh/lw/1/head -> origin/gh/lw/1/head 2025-08-14T21:22:30.3186730Z * [new branch] gh/lw/1/orig -> origin/gh/lw/1/orig 2025-08-14T21:22:30.3188068Z * [new branch] gh/lw/2/base -> origin/gh/lw/2/base 2025-08-14T21:22:30.3188944Z * [new branch] gh/lw/2/head -> origin/gh/lw/2/head 2025-08-14T21:22:30.3189868Z * [new branch] gh/lw/2/orig -> origin/gh/lw/2/orig 2025-08-14T21:22:30.3191377Z * [new branch] gh/lw/3/base -> origin/gh/lw/3/base 2025-08-14T21:22:30.3192217Z * [new branch] gh/lw/3/head -> origin/gh/lw/3/head 2025-08-14T21:22:30.3193241Z * [new branch] gh/lw/3/orig -> origin/gh/lw/3/orig 2025-08-14T21:22:30.3194709Z * [new branch] gh/malfet/14/base -> origin/gh/malfet/14/base 2025-08-14T21:22:30.3196334Z * [new branch] gh/malfet/330/base -> origin/gh/malfet/330/base 2025-08-14T21:22:30.3197304Z * [new branch] gh/malfet/330/head -> origin/gh/malfet/330/head 2025-08-14T21:22:30.3198256Z * [new branch] gh/malfet/330/orig -> origin/gh/malfet/330/orig 2025-08-14T21:22:30.3199523Z * [new branch] gh/malfet/396/base -> origin/gh/malfet/396/base 2025-08-14T21:22:30.3200394Z * [new branch] gh/malfet/396/head -> origin/gh/malfet/396/head 2025-08-14T21:22:30.3201486Z * [new branch] gh/malfet/396/orig -> origin/gh/malfet/396/orig 2025-08-14T21:22:30.3202716Z * [new branch] gh/malfet/397/base -> origin/gh/malfet/397/base 2025-08-14T21:22:30.3203642Z * [new branch] gh/malfet/397/head -> origin/gh/malfet/397/head 2025-08-14T21:22:30.3204708Z * [new branch] gh/malfet/397/orig -> origin/gh/malfet/397/orig 2025-08-14T21:22:30.3213546Z * [new branch] gh/malfet/398/base -> origin/gh/malfet/398/base 2025-08-14T21:22:30.3213876Z * [new branch] gh/malfet/398/head -> origin/gh/malfet/398/head 2025-08-14T21:22:30.3214122Z * [new branch] gh/malfet/398/orig -> 
origin/gh/malfet/398/orig 2025-08-14T21:22:30.3214417Z * [new branch] gh/malfet/399/base -> origin/gh/malfet/399/base 2025-08-14T21:22:30.3214676Z * [new branch] gh/malfet/399/head -> origin/gh/malfet/399/head 2025-08-14T21:22:30.3215644Z * [new branch] gh/malfet/399/orig -> origin/gh/malfet/399/orig 2025-08-14T21:22:30.3216856Z * [new branch] gh/malfet/414/base -> origin/gh/malfet/414/base 2025-08-14T21:22:30.3217794Z * [new branch] gh/malfet/414/head -> origin/gh/malfet/414/head 2025-08-14T21:22:30.3218740Z * [new branch] gh/malfet/414/orig -> origin/gh/malfet/414/orig 2025-08-14T21:22:30.3225834Z * [new branch] gh/malfet/417/base -> origin/gh/malfet/417/base 2025-08-14T21:22:30.3226135Z * [new branch] gh/malfet/417/head -> origin/gh/malfet/417/head 2025-08-14T21:22:30.3226373Z * [new branch] gh/malfet/417/orig -> origin/gh/malfet/417/orig 2025-08-14T21:22:30.3226602Z * [new branch] gh/malfet/418/base -> origin/gh/malfet/418/base 2025-08-14T21:22:30.3226875Z * [new branch] gh/malfet/418/head -> origin/gh/malfet/418/head 2025-08-14T21:22:30.3227082Z * [new branch] gh/malfet/418/orig -> origin/gh/malfet/418/orig 2025-08-14T21:22:30.3227406Z * [new branch] gh/malfet/422/base -> origin/gh/malfet/422/base 2025-08-14T21:22:30.3227629Z * [new branch] gh/malfet/422/head -> origin/gh/malfet/422/head 2025-08-14T21:22:30.3228140Z * [new branch] gh/malfet/422/orig -> origin/gh/malfet/422/orig 2025-08-14T21:22:30.3229516Z * [new branch] gh/malfet/438/base -> origin/gh/malfet/438/base 2025-08-14T21:22:30.3230306Z * [new branch] gh/malfet/438/head -> origin/gh/malfet/438/head 2025-08-14T21:22:30.3231285Z * [new branch] gh/malfet/438/orig -> origin/gh/malfet/438/orig 2025-08-14T21:22:30.3232451Z * [new branch] gh/malfet/439/base -> origin/gh/malfet/439/base 2025-08-14T21:22:30.3233393Z * [new branch] gh/malfet/439/head -> origin/gh/malfet/439/head 2025-08-14T21:22:30.3238675Z * [new branch] gh/malfet/439/orig -> origin/gh/malfet/439/orig 2025-08-14T21:22:30.3242952Z * [new branch] gh/malfet/440/base -> origin/gh/malfet/440/base 2025-08-14T21:22:30.3243229Z * [new branch] gh/malfet/440/head -> origin/gh/malfet/440/head 2025-08-14T21:22:30.3243523Z * [new branch] gh/malfet/440/orig -> origin/gh/malfet/440/orig 2025-08-14T21:22:30.3243766Z * [new branch] gh/malfet/441/base -> origin/gh/malfet/441/base 2025-08-14T21:22:30.3244192Z * [new branch] gh/malfet/441/head -> origin/gh/malfet/441/head 2025-08-14T21:22:30.3245230Z * [new branch] gh/malfet/441/orig -> origin/gh/malfet/441/orig 2025-08-14T21:22:30.3246482Z * [new branch] gh/malfet/442/base -> origin/gh/malfet/442/base 2025-08-14T21:22:30.3247431Z * [new branch] gh/malfet/442/head -> origin/gh/malfet/442/head 2025-08-14T21:22:30.3252806Z * [new branch] gh/malfet/442/orig -> origin/gh/malfet/442/orig 2025-08-14T21:22:30.3253366Z * [new branch] gh/malfet/443/base -> origin/gh/malfet/443/base 2025-08-14T21:22:30.3253613Z * [new branch] gh/malfet/443/head -> origin/gh/malfet/443/head 2025-08-14T21:22:30.3253846Z * [new branch] gh/malfet/443/orig -> origin/gh/malfet/443/orig 2025-08-14T21:22:30.3254303Z * [new branch] gh/malfet/444/base -> origin/gh/malfet/444/base 2025-08-14T21:22:30.3254725Z * [new branch] gh/malfet/444/head -> origin/gh/malfet/444/head 2025-08-14T21:22:30.3255723Z * [new branch] gh/malfet/444/orig -> origin/gh/malfet/444/orig 2025-08-14T21:22:30.3257270Z * [new branch] gh/malfet/445/base -> origin/gh/malfet/445/base 2025-08-14T21:22:30.3258234Z * [new branch] gh/malfet/445/head -> origin/gh/malfet/445/head 2025-08-14T21:22:30.3259135Z * 
[new branch] gh/malfet/445/orig -> origin/gh/malfet/445/orig 2025-08-14T21:22:30.3260653Z * [new branch] gh/malfet/446/base -> origin/gh/malfet/446/base 2025-08-14T21:22:30.3261781Z * [new branch] gh/malfet/446/head -> origin/gh/malfet/446/head 2025-08-14T21:22:30.3262795Z * [new branch] gh/malfet/446/orig -> origin/gh/malfet/446/orig 2025-08-14T21:22:30.3271380Z * [new branch] gh/malfet/447/base -> origin/gh/malfet/447/base 2025-08-14T21:22:30.3271628Z * [new branch] gh/malfet/447/head -> origin/gh/malfet/447/head 2025-08-14T21:22:30.3271921Z * [new branch] gh/malfet/448/base -> origin/gh/malfet/448/base 2025-08-14T21:22:30.3272171Z * [new branch] gh/malfet/448/head -> origin/gh/malfet/448/head 2025-08-14T21:22:30.3272847Z * [new branch] gh/malfet/449/base -> origin/gh/malfet/449/base 2025-08-14T21:22:30.3273760Z * [new branch] gh/malfet/449/head -> origin/gh/malfet/449/head 2025-08-14T21:22:30.3274971Z * [new branch] gh/malfet/450/base -> origin/gh/malfet/450/base 2025-08-14T21:22:30.3275937Z * [new branch] gh/malfet/450/head -> origin/gh/malfet/450/head 2025-08-14T21:22:30.3277229Z * [new branch] gh/malfet/451/base -> origin/gh/malfet/451/base 2025-08-14T21:22:30.3281505Z * [new branch] gh/malfet/451/head -> origin/gh/malfet/451/head 2025-08-14T21:22:30.3281797Z * [new branch] gh/malfet/452/base -> origin/gh/malfet/452/base 2025-08-14T21:22:30.3282018Z * [new branch] gh/malfet/452/head -> origin/gh/malfet/452/head 2025-08-14T21:22:30.3282202Z * [new branch] gh/malfet/452/orig -> origin/gh/malfet/452/orig 2025-08-14T21:22:30.3282620Z * [new branch] gh/malfet/453/base -> origin/gh/malfet/453/base 2025-08-14T21:22:30.3283686Z * [new branch] gh/malfet/453/head -> origin/gh/malfet/453/head 2025-08-14T21:22:30.3284602Z * [new branch] gh/malfet/453/orig -> origin/gh/malfet/453/orig 2025-08-14T21:22:30.3285753Z * [new branch] gh/malfet/454/base -> origin/gh/malfet/454/base 2025-08-14T21:22:30.3286645Z * [new branch] gh/malfet/454/head -> origin/gh/malfet/454/head 2025-08-14T21:22:30.3287570Z * [new branch] gh/malfet/454/orig -> origin/gh/malfet/454/orig 2025-08-14T21:22:30.3288734Z * [new branch] gh/malfet/455/base -> origin/gh/malfet/455/base 2025-08-14T21:22:30.3289736Z * [new branch] gh/malfet/455/head -> origin/gh/malfet/455/head 2025-08-14T21:22:30.3290632Z * [new branch] gh/malfet/455/orig -> origin/gh/malfet/455/orig 2025-08-14T21:22:30.3292062Z * [new branch] gh/malfet/456/base -> origin/gh/malfet/456/base 2025-08-14T21:22:30.3297260Z * [new branch] gh/malfet/456/head -> origin/gh/malfet/456/head 2025-08-14T21:22:30.3298185Z * [new branch] gh/malfet/456/orig -> origin/gh/malfet/456/orig 2025-08-14T21:22:30.3299466Z * [new branch] gh/malfet/457/base -> origin/gh/malfet/457/base 2025-08-14T21:22:30.3300373Z * [new branch] gh/malfet/457/head -> origin/gh/malfet/457/head 2025-08-14T21:22:30.3301326Z * [new branch] gh/malfet/457/orig -> origin/gh/malfet/457/orig 2025-08-14T21:22:30.3302532Z * [new branch] gh/malfet/458/base -> origin/gh/malfet/458/base 2025-08-14T21:22:30.3303398Z * [new branch] gh/malfet/458/head -> origin/gh/malfet/458/head 2025-08-14T21:22:30.3304488Z * [new branch] gh/malfet/458/orig -> origin/gh/malfet/458/orig 2025-08-14T21:22:30.3305639Z * [new branch] gh/malfet/459/base -> origin/gh/malfet/459/base 2025-08-14T21:22:30.3311299Z * [new branch] gh/malfet/459/head -> origin/gh/malfet/459/head 2025-08-14T21:22:30.3311493Z * [new branch] gh/malfet/459/orig -> origin/gh/malfet/459/orig 2025-08-14T21:22:30.3311678Z * [new branch] gh/malfet/460/base -> 
origin/gh/malfet/460/base 2025-08-14T21:22:30.3311941Z * [new branch] gh/malfet/460/head -> origin/gh/malfet/460/head 2025-08-14T21:22:30.3312149Z * [new branch] gh/malfet/460/orig -> origin/gh/malfet/460/orig 2025-08-14T21:22:30.3312381Z * [new branch] gh/malfet/461/base -> origin/gh/malfet/461/base 2025-08-14T21:22:30.3313004Z * [new branch] gh/malfet/461/head -> origin/gh/malfet/461/head 2025-08-14T21:22:30.3313990Z * [new branch] gh/malfet/461/orig -> origin/gh/malfet/461/orig 2025-08-14T21:22:30.3315221Z * [new branch] gh/malfet/462/base -> origin/gh/malfet/462/base 2025-08-14T21:22:30.3316170Z * [new branch] gh/malfet/462/head -> origin/gh/malfet/462/head 2025-08-14T21:22:30.3317076Z * [new branch] gh/malfet/462/orig -> origin/gh/malfet/462/orig 2025-08-14T21:22:30.3318330Z * [new branch] gh/malfet/463/base -> origin/gh/malfet/463/base 2025-08-14T21:22:30.3319333Z * [new branch] gh/malfet/463/head -> origin/gh/malfet/463/head 2025-08-14T21:22:30.3320225Z * [new branch] gh/malfet/463/orig -> origin/gh/malfet/463/orig 2025-08-14T21:22:30.3330076Z * [new branch] gh/malfet/464/base -> origin/gh/malfet/464/base 2025-08-14T21:22:30.3330898Z * [new branch] gh/malfet/464/head -> origin/gh/malfet/464/head 2025-08-14T21:22:30.3331895Z * [new branch] gh/malfet/464/orig -> origin/gh/malfet/464/orig 2025-08-14T21:22:30.3333279Z * [new branch] gh/malfet/465/base -> origin/gh/malfet/465/base 2025-08-14T21:22:30.3334119Z * [new branch] gh/malfet/465/head -> origin/gh/malfet/465/head 2025-08-14T21:22:30.3335112Z * [new branch] gh/malfet/465/orig -> origin/gh/malfet/465/orig 2025-08-14T21:22:30.3344024Z * [new branch] gh/malfet/466/base -> origin/gh/malfet/466/base 2025-08-14T21:22:30.3344297Z * [new branch] gh/malfet/466/head -> origin/gh/malfet/466/head 2025-08-14T21:22:30.3344531Z * [new branch] gh/malfet/466/orig -> origin/gh/malfet/466/orig 2025-08-14T21:22:30.3344775Z * [new branch] gh/malfet/467/base -> origin/gh/malfet/467/base 2025-08-14T21:22:30.3345023Z * [new branch] gh/malfet/467/head -> origin/gh/malfet/467/head 2025-08-14T21:22:30.3345289Z * [new branch] gh/malfet/467/orig -> origin/gh/malfet/467/orig 2025-08-14T21:22:30.3345571Z * [new branch] gh/malfet/468/base -> origin/gh/malfet/468/base 2025-08-14T21:22:30.3345916Z * [new branch] gh/malfet/468/head -> origin/gh/malfet/468/head 2025-08-14T21:22:30.3346871Z * [new branch] gh/malfet/468/orig -> origin/gh/malfet/468/orig 2025-08-14T21:22:30.3348131Z * [new branch] gh/malfet/469/base -> origin/gh/malfet/469/base 2025-08-14T21:22:30.3349335Z * [new branch] gh/malfet/469/head -> origin/gh/malfet/469/head 2025-08-14T21:22:30.3352389Z * [new branch] gh/malfet/469/orig -> origin/gh/malfet/469/orig 2025-08-14T21:22:30.3352587Z * [new branch] gh/malfet/470/base -> origin/gh/malfet/470/base 2025-08-14T21:22:30.3352960Z * [new branch] gh/malfet/470/head -> origin/gh/malfet/470/head 2025-08-14T21:22:30.3353932Z * [new branch] gh/malfet/470/orig -> origin/gh/malfet/470/orig 2025-08-14T21:22:30.3355148Z * [new branch] gh/malfet/471/base -> origin/gh/malfet/471/base 2025-08-14T21:22:30.3356154Z * [new branch] gh/malfet/471/head -> origin/gh/malfet/471/head 2025-08-14T21:22:30.3357016Z * [new branch] gh/malfet/471/orig -> origin/gh/malfet/471/orig 2025-08-14T21:22:30.3358418Z * [new branch] gh/malfet/472/base -> origin/gh/malfet/472/base 2025-08-14T21:22:30.3359179Z * [new branch] gh/malfet/472/head -> origin/gh/malfet/472/head 2025-08-14T21:22:30.3360268Z * [new branch] gh/malfet/472/orig -> origin/gh/malfet/472/orig 2025-08-14T21:22:30.3361590Z * 
[new branch] gh/malfet/473/base -> origin/gh/malfet/473/base 2025-08-14T21:22:30.3362555Z * [new branch] gh/malfet/473/head -> origin/gh/malfet/473/head 2025-08-14T21:22:30.3363384Z * [new branch] gh/malfet/473/orig -> origin/gh/malfet/473/orig 2025-08-14T21:22:30.3369238Z * [new branch] gh/malfet/474/base -> origin/gh/malfet/474/base 2025-08-14T21:22:30.3373068Z * [new branch] gh/malfet/474/head -> origin/gh/malfet/474/head 2025-08-14T21:22:30.3373326Z * [new branch] gh/malfet/474/orig -> origin/gh/malfet/474/orig 2025-08-14T21:22:30.3373609Z * [new branch] gh/malfet/475/base -> origin/gh/malfet/475/base 2025-08-14T21:22:30.3373894Z * [new branch] gh/malfet/475/head -> origin/gh/malfet/475/head 2025-08-14T21:22:30.3374451Z * [new branch] gh/malfet/475/orig -> origin/gh/malfet/475/orig 2025-08-14T21:22:30.3375775Z * [new branch] gh/malfet/476/base -> origin/gh/malfet/476/base 2025-08-14T21:22:30.3376670Z * [new branch] gh/malfet/476/head -> origin/gh/malfet/476/head 2025-08-14T21:22:30.3377616Z * [new branch] gh/malfet/476/orig -> origin/gh/malfet/476/orig 2025-08-14T21:22:30.3383178Z * [new branch] gh/malfet/477/base -> origin/gh/malfet/477/base 2025-08-14T21:22:30.3383370Z * [new branch] gh/malfet/477/head -> origin/gh/malfet/477/head 2025-08-14T21:22:30.3383594Z * [new branch] gh/malfet/477/orig -> origin/gh/malfet/477/orig 2025-08-14T21:22:30.3383776Z * [new branch] gh/malfet/478/base -> origin/gh/malfet/478/base 2025-08-14T21:22:30.3383962Z * [new branch] gh/malfet/478/head -> origin/gh/malfet/478/head 2025-08-14T21:22:30.3384206Z * [new branch] gh/malfet/478/orig -> origin/gh/malfet/478/orig 2025-08-14T21:22:30.3385328Z * [new branch] gh/malfet/479/base -> origin/gh/malfet/479/base 2025-08-14T21:22:30.3386308Z * [new branch] gh/malfet/479/head -> origin/gh/malfet/479/head 2025-08-14T21:22:30.3387201Z * [new branch] gh/malfet/479/orig -> origin/gh/malfet/479/orig 2025-08-14T21:22:30.3388545Z * [new branch] gh/malfet/480/base -> origin/gh/malfet/480/base 2025-08-14T21:22:30.3389415Z * [new branch] gh/malfet/480/head -> origin/gh/malfet/480/head 2025-08-14T21:22:30.3390396Z * [new branch] gh/malfet/480/orig -> origin/gh/malfet/480/orig 2025-08-14T21:22:30.3391632Z * [new branch] gh/malfet/481/base -> origin/gh/malfet/481/base 2025-08-14T21:22:30.3392600Z * [new branch] gh/malfet/481/head -> origin/gh/malfet/481/head 2025-08-14T21:22:30.3401678Z * [new branch] gh/malfet/481/orig -> origin/gh/malfet/481/orig 2025-08-14T21:22:30.3401967Z * [new branch] gh/malfet/482/base -> origin/gh/malfet/482/base 2025-08-14T21:22:30.3402283Z * [new branch] gh/malfet/482/head -> origin/gh/malfet/482/head 2025-08-14T21:22:30.3402519Z * [new branch] gh/malfet/482/orig -> origin/gh/malfet/482/orig 2025-08-14T21:22:30.3402783Z * [new branch] gh/malfet/483/base -> origin/gh/malfet/483/base 2025-08-14T21:22:30.3403874Z * [new branch] gh/malfet/483/head -> origin/gh/malfet/483/head 2025-08-14T21:22:30.3404720Z * [new branch] gh/malfet/483/orig -> origin/gh/malfet/483/orig 2025-08-14T21:22:30.3406011Z * [new branch] gh/malfet/484/base -> origin/gh/malfet/484/base 2025-08-14T21:22:30.3407130Z * [new branch] gh/malfet/484/head -> origin/gh/malfet/484/head 2025-08-14T21:22:30.3412418Z * [new branch] gh/malfet/484/orig -> origin/gh/malfet/484/orig 2025-08-14T21:22:30.3412614Z * [new branch] gh/malfet/485/base -> origin/gh/malfet/485/base 2025-08-14T21:22:30.3412799Z * [new branch] gh/malfet/485/head -> origin/gh/malfet/485/head 2025-08-14T21:22:30.3413010Z * [new branch] gh/malfet/485/orig -> 
origin/gh/malfet/485/orig 2025-08-14T21:22:30.3413238Z * [new branch] gh/malfet/486/base -> origin/gh/malfet/486/base 2025-08-14T21:22:30.3413483Z * [new branch] gh/malfet/486/head -> origin/gh/malfet/486/head 2025-08-14T21:22:30.3414375Z * [new branch] gh/malfet/486/orig -> origin/gh/malfet/486/orig 2025-08-14T21:22:30.3415589Z * [new branch] gh/malfet/487/base -> origin/gh/malfet/487/base 2025-08-14T21:22:30.3416623Z * [new branch] gh/malfet/487/head -> origin/gh/malfet/487/head 2025-08-14T21:22:30.3417423Z * [new branch] gh/malfet/487/orig -> origin/gh/malfet/487/orig 2025-08-14T21:22:30.3418835Z * [new branch] gh/malfet/488/base -> origin/gh/malfet/488/base 2025-08-14T21:22:30.3419740Z * [new branch] gh/malfet/488/head -> origin/gh/malfet/488/head 2025-08-14T21:22:30.3420621Z * [new branch] gh/malfet/488/orig -> origin/gh/malfet/488/orig 2025-08-14T21:22:30.3421819Z * [new branch] gh/malfet/489/base -> origin/gh/malfet/489/base 2025-08-14T21:22:30.3422908Z * [new branch] gh/malfet/489/head -> origin/gh/malfet/489/head 2025-08-14T21:22:30.3428294Z * [new branch] gh/malfet/489/orig -> origin/gh/malfet/489/orig 2025-08-14T21:22:30.3429534Z * [new branch] gh/malfet/490/base -> origin/gh/malfet/490/base 2025-08-14T21:22:30.3430459Z * [new branch] gh/malfet/490/head -> origin/gh/malfet/490/head 2025-08-14T21:22:30.3431419Z * [new branch] gh/malfet/490/orig -> origin/gh/malfet/490/orig 2025-08-14T21:22:30.3432699Z * [new branch] gh/malfet/64/base -> origin/gh/malfet/64/base 2025-08-14T21:22:30.3433601Z * [new branch] gh/malfet/64/head -> origin/gh/malfet/64/head 2025-08-14T21:22:30.3446547Z * [new branch] gh/manuelcandales/10/base -> origin/gh/manuelcandales/10/base 2025-08-14T21:22:30.3446843Z * [new branch] gh/manuelcandales/10/head -> origin/gh/manuelcandales/10/head 2025-08-14T21:22:30.3447071Z * [new branch] gh/manuelcandales/10/orig -> origin/gh/manuelcandales/10/orig 2025-08-14T21:22:30.3447282Z * [new branch] gh/manuelcandales/9/base -> origin/gh/manuelcandales/9/base 2025-08-14T21:22:30.3447484Z * [new branch] gh/manuelcandales/9/head -> origin/gh/manuelcandales/9/head 2025-08-14T21:22:30.3447768Z * [new branch] gh/manuelcandales/9/orig -> origin/gh/manuelcandales/9/orig 2025-08-14T21:22:30.3447947Z * [new branch] gh/markkm/1/base -> origin/gh/markkm/1/base 2025-08-14T21:22:30.3448144Z * [new branch] gh/masnesral/204/base -> origin/gh/masnesral/204/base 2025-08-14T21:22:30.3448321Z * [new branch] gh/masnesral/204/head -> origin/gh/masnesral/204/head 2025-08-14T21:22:30.3448503Z * [new branch] gh/masnesral/204/orig -> origin/gh/masnesral/204/orig 2025-08-14T21:22:30.3449074Z * [new branch] gh/masnesral/223/base -> origin/gh/masnesral/223/base 2025-08-14T21:22:30.3449265Z * [new branch] gh/masnesral/223/head -> origin/gh/masnesral/223/head 2025-08-14T21:22:30.3449448Z * [new branch] gh/masnesral/223/orig -> origin/gh/masnesral/223/orig 2025-08-14T21:22:30.3450691Z * [new branch] gh/masnesral/224/base -> origin/gh/masnesral/224/base 2025-08-14T21:22:30.3451768Z * [new branch] gh/masnesral/224/head -> origin/gh/masnesral/224/head 2025-08-14T21:22:30.3461130Z * [new branch] gh/masnesral/224/orig -> origin/gh/masnesral/224/orig 2025-08-14T21:22:30.3462185Z * [new branch] gh/masnesral/225/base -> origin/gh/masnesral/225/base 2025-08-14T21:22:30.3463201Z * [new branch] gh/masnesral/225/head -> origin/gh/masnesral/225/head 2025-08-14T21:22:30.3464128Z * [new branch] gh/masnesral/225/orig -> origin/gh/masnesral/225/orig 2025-08-14T21:22:30.3465385Z * [new branch] gh/masnesral/226/base -> 
origin/gh/masnesral/226/base 2025-08-14T21:22:30.3470195Z * [new branch] gh/masnesral/226/head -> origin/gh/masnesral/226/head 2025-08-14T21:22:30.3470373Z * [new branch] gh/masnesral/226/orig -> origin/gh/masnesral/226/orig 2025-08-14T21:22:30.3470565Z * [new branch] gh/masnesral/227/base -> origin/gh/masnesral/227/base 2025-08-14T21:22:30.3470866Z * [new branch] gh/masnesral/227/head -> origin/gh/masnesral/227/head 2025-08-14T21:22:30.3471261Z * [new branch] gh/masnesral/227/orig -> origin/gh/masnesral/227/orig 2025-08-14T21:22:30.3474441Z * [new branch] gh/masnesral/228/base -> origin/gh/masnesral/228/base 2025-08-14T21:22:30.3474674Z * [new branch] gh/masnesral/228/head -> origin/gh/masnesral/228/head 2025-08-14T21:22:30.3474900Z * [new branch] gh/masnesral/228/orig -> origin/gh/masnesral/228/orig 2025-08-14T21:22:30.3475790Z * [new branch] gh/masnesral/229/base -> origin/gh/masnesral/229/base 2025-08-14T21:22:30.3476775Z * [new branch] gh/masnesral/229/head -> origin/gh/masnesral/229/head 2025-08-14T21:22:30.3477750Z * [new branch] gh/masnesral/229/orig -> origin/gh/masnesral/229/orig 2025-08-14T21:22:30.3478869Z * [new branch] gh/masnesral/230/base -> origin/gh/masnesral/230/base 2025-08-14T21:22:30.3479884Z * [new branch] gh/masnesral/230/head -> origin/gh/masnesral/230/head 2025-08-14T21:22:30.3482809Z * [new branch] gh/masnesral/230/orig -> origin/gh/masnesral/230/orig 2025-08-14T21:22:30.3482994Z * [new branch] gh/masnesral/231/base -> origin/gh/masnesral/231/base 2025-08-14T21:22:30.3483553Z * [new branch] gh/masnesral/231/head -> origin/gh/masnesral/231/head 2025-08-14T21:22:30.3484515Z * [new branch] gh/masnesral/231/orig -> origin/gh/masnesral/231/orig 2025-08-14T21:22:30.3485819Z * [new branch] gh/masnesral/232/base -> origin/gh/masnesral/232/base 2025-08-14T21:22:30.3486772Z * [new branch] gh/masnesral/232/head -> origin/gh/masnesral/232/head 2025-08-14T21:22:30.3487840Z * [new branch] gh/masnesral/232/orig -> origin/gh/masnesral/232/orig 2025-08-14T21:22:30.3488945Z * [new branch] gh/masnesral/233/base -> origin/gh/masnesral/233/base 2025-08-14T21:22:30.3489876Z * [new branch] gh/masnesral/233/head -> origin/gh/masnesral/233/head 2025-08-14T21:22:30.3490791Z * [new branch] gh/masnesral/233/orig -> origin/gh/masnesral/233/orig 2025-08-14T21:22:30.3492073Z * [new branch] gh/masnesral/234/base -> origin/gh/masnesral/234/base 2025-08-14T21:22:30.3493098Z * [new branch] gh/masnesral/234/head -> origin/gh/masnesral/234/head 2025-08-14T21:22:30.3494028Z * [new branch] gh/masnesral/234/orig -> origin/gh/masnesral/234/orig 2025-08-14T21:22:30.3503244Z * [new branch] gh/masnesral/235/base -> origin/gh/masnesral/235/base 2025-08-14T21:22:30.3503459Z * [new branch] gh/masnesral/235/head -> origin/gh/masnesral/235/head 2025-08-14T21:22:30.3503943Z * [new branch] gh/masnesral/235/orig -> origin/gh/masnesral/235/orig 2025-08-14T21:22:30.3505220Z * [new branch] gh/masnesral/236/base -> origin/gh/masnesral/236/base 2025-08-14T21:22:30.3506166Z * [new branch] gh/masnesral/236/head -> origin/gh/masnesral/236/head 2025-08-14T21:22:30.3507043Z * [new branch] gh/masnesral/236/orig -> origin/gh/masnesral/236/orig 2025-08-14T21:22:30.3508319Z * [new branch] gh/masnesral/34/base -> origin/gh/masnesral/34/base 2025-08-14T21:22:30.3513854Z * [new branch] gh/mhorowitz/0/base -> origin/gh/mhorowitz/0/base 2025-08-14T21:22:30.3514029Z * [new branch] gh/mhorowitz/0/head -> origin/gh/mhorowitz/0/head 2025-08-14T21:22:30.3514204Z * [new branch] gh/mhorowitz/1/base -> origin/gh/mhorowitz/1/base 
2025-08-14T21:22:30.3514368Z * [new branch] gh/mhorowitz/1/head -> origin/gh/mhorowitz/1/head 2025-08-14T21:22:30.3514541Z * [new branch] gh/mhorowitz/2/base -> origin/gh/mhorowitz/2/base 2025-08-14T21:22:30.3515344Z * [new branch] gh/mhorowitz/2/head -> origin/gh/mhorowitz/2/head 2025-08-14T21:22:30.3516606Z * [new branch] gh/mhorowitz/3/base -> origin/gh/mhorowitz/3/base 2025-08-14T21:22:30.3517469Z * [new branch] gh/mhorowitz/3/head -> origin/gh/mhorowitz/3/head 2025-08-14T21:22:30.3518558Z * [new branch] gh/mhorowitz/4/base -> origin/gh/mhorowitz/4/base 2025-08-14T21:22:30.3519476Z * [new branch] gh/mhorowitz/4/head -> origin/gh/mhorowitz/4/head 2025-08-14T21:22:30.3520622Z * [new branch] gh/mhorowitz/5/base -> origin/gh/mhorowitz/5/base 2025-08-14T21:22:30.3521567Z * [new branch] gh/mhorowitz/5/head -> origin/gh/mhorowitz/5/head 2025-08-14T21:22:30.3522797Z * [new branch] gh/mhorowitz/6/base -> origin/gh/mhorowitz/6/base 2025-08-14T21:22:30.3523675Z * [new branch] gh/mhorowitz/6/head -> origin/gh/mhorowitz/6/head 2025-08-14T21:22:30.3529746Z * [new branch] gh/mikaylagawarecki/234/base -> origin/gh/mikaylagawarecki/234/base 2025-08-14T21:22:30.3530557Z * [new branch] gh/mikaylagawarecki/234/head -> origin/gh/mikaylagawarecki/234/head 2025-08-14T21:22:30.3531813Z * [new branch] gh/mikaylagawarecki/235/base -> origin/gh/mikaylagawarecki/235/base 2025-08-14T21:22:30.3532714Z * [new branch] gh/mikaylagawarecki/235/head -> origin/gh/mikaylagawarecki/235/head 2025-08-14T21:22:30.3533896Z * [new branch] gh/mikaylagawarecki/236/base -> origin/gh/mikaylagawarecki/236/base 2025-08-14T21:22:30.3534889Z * [new branch] gh/mikaylagawarecki/236/head -> origin/gh/mikaylagawarecki/236/head 2025-08-14T21:22:30.3535943Z * [new branch] gh/mikaylagawarecki/237/base -> origin/gh/mikaylagawarecki/237/base 2025-08-14T21:22:30.3536814Z * [new branch] gh/mikaylagawarecki/237/head -> origin/gh/mikaylagawarecki/237/head 2025-08-14T21:22:30.3538148Z * [new branch] gh/mikaylagawarecki/238/base -> origin/gh/mikaylagawarecki/238/base 2025-08-14T21:22:30.3542950Z * [new branch] gh/mikaylagawarecki/238/head -> origin/gh/mikaylagawarecki/238/head 2025-08-14T21:22:30.3543233Z * [new branch] gh/mikaylagawarecki/313/base -> origin/gh/mikaylagawarecki/313/base 2025-08-14T21:22:30.3543460Z * [new branch] gh/mikaylagawarecki/313/head -> origin/gh/mikaylagawarecki/313/head 2025-08-14T21:22:30.3543669Z * [new branch] gh/mikaylagawarecki/313/orig -> origin/gh/mikaylagawarecki/313/orig 2025-08-14T21:22:30.3544109Z * [new branch] gh/mikaylagawarecki/317/base -> origin/gh/mikaylagawarecki/317/base 2025-08-14T21:22:30.3545722Z * [new branch] gh/mikaylagawarecki/317/head -> origin/gh/mikaylagawarecki/317/head 2025-08-14T21:22:30.3546452Z * [new branch] gh/mikaylagawarecki/317/orig -> origin/gh/mikaylagawarecki/317/orig 2025-08-14T21:22:30.3547711Z * [new branch] gh/mikaylagawarecki/318/base -> origin/gh/mikaylagawarecki/318/base 2025-08-14T21:22:30.3548633Z * [new branch] gh/mikaylagawarecki/318/head -> origin/gh/mikaylagawarecki/318/head 2025-08-14T21:22:30.3550117Z * [new branch] gh/mikaylagawarecki/318/orig -> origin/gh/mikaylagawarecki/318/orig 2025-08-14T21:22:30.3551409Z * [new branch] gh/mikaylagawarecki/319/base -> origin/gh/mikaylagawarecki/319/base 2025-08-14T21:22:30.3552361Z * [new branch] gh/mikaylagawarecki/319/head -> origin/gh/mikaylagawarecki/319/head 2025-08-14T21:22:30.3561379Z * [new branch] gh/mikaylagawarecki/319/orig -> origin/gh/mikaylagawarecki/319/orig 2025-08-14T21:22:30.3561654Z * [new branch] 
gh/mikaylagawarecki/320/base -> origin/gh/mikaylagawarecki/320/base 2025-08-14T21:22:30.3561922Z * [new branch] gh/mikaylagawarecki/320/head -> origin/gh/mikaylagawarecki/320/head 2025-08-14T21:22:30.3562304Z * [new branch] gh/mikaylagawarecki/320/orig -> origin/gh/mikaylagawarecki/320/orig 2025-08-14T21:22:30.3562578Z * [new branch] gh/mikaylagawarecki/321/base -> origin/gh/mikaylagawarecki/321/base 2025-08-14T21:22:30.3563166Z * [new branch] gh/mikaylagawarecki/321/head -> origin/gh/mikaylagawarecki/321/head 2025-08-14T21:22:30.3564372Z * [new branch] gh/mikaylagawarecki/321/orig -> origin/gh/mikaylagawarecki/321/orig 2025-08-14T21:22:30.3565660Z * [new branch] gh/mikaylagawarecki/322/base -> origin/gh/mikaylagawarecki/322/base 2025-08-14T21:22:30.3566546Z * [new branch] gh/mikaylagawarecki/322/head -> origin/gh/mikaylagawarecki/322/head 2025-08-14T21:22:30.3571844Z * [new branch] gh/mikaylagawarecki/322/orig -> origin/gh/mikaylagawarecki/322/orig 2025-08-14T21:22:30.3572054Z * [new branch] gh/mikaylagawarecki/323/base -> origin/gh/mikaylagawarecki/323/base 2025-08-14T21:22:30.3572272Z * [new branch] gh/mikaylagawarecki/323/head -> origin/gh/mikaylagawarecki/323/head 2025-08-14T21:22:30.3572470Z * [new branch] gh/mikaylagawarecki/323/orig -> origin/gh/mikaylagawarecki/323/orig 2025-08-14T21:22:30.3572669Z * [new branch] gh/mikaylagawarecki/324/base -> origin/gh/mikaylagawarecki/324/base 2025-08-14T21:22:30.3572931Z * [new branch] gh/mikaylagawarecki/324/head -> origin/gh/mikaylagawarecki/324/head 2025-08-14T21:22:30.3573913Z * [new branch] gh/mikaylagawarecki/324/orig -> origin/gh/mikaylagawarecki/324/orig 2025-08-14T21:22:30.3575071Z * [new branch] gh/mikaylagawarecki/325/base -> origin/gh/mikaylagawarecki/325/base 2025-08-14T21:22:30.3576074Z * [new branch] gh/mikaylagawarecki/325/head -> origin/gh/mikaylagawarecki/325/head 2025-08-14T21:22:30.3576924Z * [new branch] gh/mikaylagawarecki/325/orig -> origin/gh/mikaylagawarecki/325/orig 2025-08-14T21:22:30.3578414Z * [new branch] gh/mikaylagawarecki/326/base -> origin/gh/mikaylagawarecki/326/base 2025-08-14T21:22:30.3579330Z * [new branch] gh/mikaylagawarecki/326/head -> origin/gh/mikaylagawarecki/326/head 2025-08-14T21:22:30.3580269Z * [new branch] gh/mikaylagawarecki/326/orig -> origin/gh/mikaylagawarecki/326/orig 2025-08-14T21:22:30.3581768Z * [new branch] gh/mikaylagawarecki/327/base -> origin/gh/mikaylagawarecki/327/base 2025-08-14T21:22:30.3587373Z * [new branch] gh/mikaylagawarecki/327/head -> origin/gh/mikaylagawarecki/327/head 2025-08-14T21:22:30.3588105Z * [new branch] gh/mikaylagawarecki/327/orig -> origin/gh/mikaylagawarecki/327/orig 2025-08-14T21:22:30.3589704Z * [new branch] gh/mikaylagawarecki/328/base -> origin/gh/mikaylagawarecki/328/base 2025-08-14T21:22:30.3590675Z * [new branch] gh/mikaylagawarecki/328/head -> origin/gh/mikaylagawarecki/328/head 2025-08-14T21:22:30.3591597Z * [new branch] gh/mikaylagawarecki/328/orig -> origin/gh/mikaylagawarecki/328/orig 2025-08-14T21:22:30.3592886Z * [new branch] gh/mikaylagawarecki/329/base -> origin/gh/mikaylagawarecki/329/base 2025-08-14T21:22:30.3593807Z * [new branch] gh/mikaylagawarecki/329/head -> origin/gh/mikaylagawarecki/329/head 2025-08-14T21:22:30.3594713Z * [new branch] gh/mikaylagawarecki/329/orig -> origin/gh/mikaylagawarecki/329/orig 2025-08-14T21:22:30.3596139Z * [new branch] gh/mikaylagawarecki/330/base -> origin/gh/mikaylagawarecki/330/base 2025-08-14T21:22:30.3600930Z * [new branch] gh/mikaylagawarecki/330/head -> origin/gh/mikaylagawarecki/330/head 
2025-08-14T21:22:30.3601249Z * [new branch] gh/mikaylagawarecki/330/orig -> origin/gh/mikaylagawarecki/330/orig 2025-08-14T21:22:30.3601523Z * [new branch] gh/mikaylagawarecki/331/base -> origin/gh/mikaylagawarecki/331/base 2025-08-14T21:22:30.3601820Z * [new branch] gh/mikaylagawarecki/331/head -> origin/gh/mikaylagawarecki/331/head 2025-08-14T21:22:30.3602024Z * [new branch] gh/mikaylagawarecki/331/orig -> origin/gh/mikaylagawarecki/331/orig 2025-08-14T21:22:30.3603046Z * [new branch] gh/mikaylagawarecki/332/base -> origin/gh/mikaylagawarecki/332/base 2025-08-14T21:22:30.3603920Z * [new branch] gh/mikaylagawarecki/332/head -> origin/gh/mikaylagawarecki/332/head 2025-08-14T21:22:30.3604848Z * [new branch] gh/mikaylagawarecki/332/orig -> origin/gh/mikaylagawarecki/332/orig 2025-08-14T21:22:30.3606018Z * [new branch] gh/mikaylagawarecki/333/base -> origin/gh/mikaylagawarecki/333/base 2025-08-14T21:22:30.3606960Z * [new branch] gh/mikaylagawarecki/333/head -> origin/gh/mikaylagawarecki/333/head 2025-08-14T21:22:30.3607875Z * [new branch] gh/mikaylagawarecki/333/orig -> origin/gh/mikaylagawarecki/333/orig 2025-08-14T21:22:30.3609625Z * [new branch] gh/mikaylagawarecki/334/base -> origin/gh/mikaylagawarecki/334/base 2025-08-14T21:22:30.3610539Z * [new branch] gh/mikaylagawarecki/334/head -> origin/gh/mikaylagawarecki/334/head 2025-08-14T21:22:30.3620118Z * [new branch] gh/mikaylagawarecki/334/orig -> origin/gh/mikaylagawarecki/334/orig 2025-08-14T21:22:30.3621726Z * [new branch] gh/mlazos/1/base -> origin/gh/mlazos/1/base 2025-08-14T21:22:30.3622741Z * [new branch] gh/mlazos/1/head -> origin/gh/mlazos/1/head 2025-08-14T21:22:30.3623650Z * [new branch] gh/mlazos/1/orig -> origin/gh/mlazos/1/orig 2025-08-14T21:22:30.3625098Z * [new branch] gh/mlazos/10/base -> origin/gh/mlazos/10/base 2025-08-14T21:22:30.3633821Z * [new branch] gh/mlazos/10/head -> origin/gh/mlazos/10/head 2025-08-14T21:22:30.3634091Z * [new branch] gh/mlazos/10/orig -> origin/gh/mlazos/10/orig 2025-08-14T21:22:30.3634310Z * [new branch] gh/mlazos/11/base -> origin/gh/mlazos/11/base 2025-08-14T21:22:30.3634483Z * [new branch] gh/mlazos/11/head -> origin/gh/mlazos/11/head 2025-08-14T21:22:30.3634652Z * [new branch] gh/mlazos/11/orig -> origin/gh/mlazos/11/orig 2025-08-14T21:22:30.3634809Z * [new branch] gh/mlazos/12/base -> origin/gh/mlazos/12/base 2025-08-14T21:22:30.3634971Z * [new branch] gh/mlazos/12/head -> origin/gh/mlazos/12/head 2025-08-14T21:22:30.3635125Z * [new branch] gh/mlazos/12/orig -> origin/gh/mlazos/12/orig 2025-08-14T21:22:30.3635606Z * [new branch] gh/mlazos/13/base -> origin/gh/mlazos/13/base 2025-08-14T21:22:30.3636636Z * [new branch] gh/mlazos/13/head -> origin/gh/mlazos/13/head 2025-08-14T21:22:30.3637538Z * [new branch] gh/mlazos/13/orig -> origin/gh/mlazos/13/orig 2025-08-14T21:22:30.3638724Z * [new branch] gh/mlazos/2/base -> origin/gh/mlazos/2/base 2025-08-14T21:22:30.3639635Z * [new branch] gh/mlazos/2/head -> origin/gh/mlazos/2/head 2025-08-14T21:22:30.3648166Z * [new branch] gh/mlazos/2/orig -> origin/gh/mlazos/2/orig 2025-08-14T21:22:30.3648370Z * [new branch] gh/mlazos/3/base -> origin/gh/mlazos/3/base 2025-08-14T21:22:30.3648571Z * [new branch] gh/mlazos/3/head -> origin/gh/mlazos/3/head 2025-08-14T21:22:30.3649022Z * [new branch] gh/mlazos/3/orig -> origin/gh/mlazos/3/orig 2025-08-14T21:22:30.3649181Z * [new branch] gh/mlazos/4/base -> origin/gh/mlazos/4/base 2025-08-14T21:22:30.3649343Z * [new branch] gh/mlazos/4/head -> origin/gh/mlazos/4/head 2025-08-14T21:22:30.3649502Z * [new branch] 
gh/mlazos/4/orig -> origin/gh/mlazos/4/orig 2025-08-14T21:22:30.3649788Z * [new branch] gh/mlazos/5/base -> origin/gh/mlazos/5/base 2025-08-14T21:22:30.3649943Z * [new branch] gh/mlazos/5/head -> origin/gh/mlazos/5/head 2025-08-14T21:22:30.3650766Z * [new branch] gh/mlazos/5/orig -> origin/gh/mlazos/5/orig 2025-08-14T21:22:30.3652075Z * [new branch] gh/mlazos/6/base -> origin/gh/mlazos/6/base 2025-08-14T21:22:30.3652938Z * [new branch] gh/mlazos/6/head -> origin/gh/mlazos/6/head 2025-08-14T21:22:30.3653853Z * [new branch] gh/mlazos/6/orig -> origin/gh/mlazos/6/orig 2025-08-14T21:22:30.3661699Z * [new branch] gh/mlazos/7/base -> origin/gh/mlazos/7/base 2025-08-14T21:22:30.3662671Z * [new branch] gh/mlazos/7/head -> origin/gh/mlazos/7/head 2025-08-14T21:22:30.3663575Z * [new branch] gh/mlazos/7/orig -> origin/gh/mlazos/7/orig 2025-08-14T21:22:30.3664783Z * [new branch] gh/mlazos/8/base -> origin/gh/mlazos/8/base 2025-08-14T21:22:30.3665912Z * [new branch] gh/mlazos/8/head -> origin/gh/mlazos/8/head 2025-08-14T21:22:30.3666867Z * [new branch] gh/mlazos/8/orig -> origin/gh/mlazos/8/orig 2025-08-14T21:22:30.3668225Z * [new branch] gh/mlazos/9/base -> origin/gh/mlazos/9/base 2025-08-14T21:22:30.3675135Z * [new branch] gh/mlazos/9/head -> origin/gh/mlazos/9/head 2025-08-14T21:22:30.3675334Z * [new branch] gh/mlazos/9/orig -> origin/gh/mlazos/9/orig 2025-08-14T21:22:30.3675540Z * [new branch] gh/mrmiywj/1/base -> origin/gh/mrmiywj/1/base 2025-08-14T21:22:30.3675739Z * [new branch] gh/mrmiywj/1/head -> origin/gh/mrmiywj/1/head 2025-08-14T21:22:30.3675966Z * [new branch] gh/muchulee8/62/base -> origin/gh/muchulee8/62/base 2025-08-14T21:22:30.3676263Z * [new branch] gh/muchulee8/62/head -> origin/gh/muchulee8/62/head 2025-08-14T21:22:30.3676750Z * [new branch] gh/muchulee8/62/orig -> origin/gh/muchulee8/62/orig 2025-08-14T21:22:30.3678273Z * [new branch] gh/muchulee8/63/base -> origin/gh/muchulee8/63/base 2025-08-14T21:22:30.3678912Z * [new branch] gh/muchulee8/63/head -> origin/gh/muchulee8/63/head 2025-08-14T21:22:30.3679879Z * [new branch] gh/muchulee8/63/orig -> origin/gh/muchulee8/63/orig 2025-08-14T21:22:30.3681467Z * [new branch] gh/muchulee8/64/base -> origin/gh/muchulee8/64/base 2025-08-14T21:22:30.3682373Z * [new branch] gh/muchulee8/64/head -> origin/gh/muchulee8/64/head 2025-08-14T21:22:30.3689520Z * [new branch] gh/muchulee8/64/orig -> origin/gh/muchulee8/64/orig 2025-08-14T21:22:30.3689743Z * [new branch] gh/muchulee8/65/base -> origin/gh/muchulee8/65/base 2025-08-14T21:22:30.3690030Z * [new branch] gh/muchulee8/65/head -> origin/gh/muchulee8/65/head 2025-08-14T21:22:30.3691098Z * [new branch] gh/muchulee8/65/orig -> origin/gh/muchulee8/65/orig 2025-08-14T21:22:30.3692586Z * [new branch] gh/oulgen/35/base -> origin/gh/oulgen/35/base 2025-08-14T21:22:30.3693869Z * [new branch] gh/oulgen/35/head -> origin/gh/oulgen/35/head 2025-08-14T21:22:30.3694743Z * [new branch] gh/oulgen/35/orig -> origin/gh/oulgen/35/orig 2025-08-14T21:22:30.3696121Z * [new branch] gh/oulgen/44/base -> origin/gh/oulgen/44/base 2025-08-14T21:22:30.3697086Z * [new branch] gh/oulgen/44/head -> origin/gh/oulgen/44/head 2025-08-14T21:22:30.3702077Z * [new branch] gh/oulgen/44/orig -> origin/gh/oulgen/44/orig 2025-08-14T21:22:30.3702267Z * [new branch] gh/oulgen/45/base -> origin/gh/oulgen/45/base 2025-08-14T21:22:30.3702495Z * [new branch] gh/oulgen/45/head -> origin/gh/oulgen/45/head 2025-08-14T21:22:30.3702660Z * [new branch] gh/oulgen/45/orig -> origin/gh/oulgen/45/orig 2025-08-14T21:22:30.3702815Z * [new branch] 
gh/oulgen/46/base -> origin/gh/oulgen/46/base 2025-08-14T21:22:30.3703790Z * [new branch] gh/oulgen/46/head -> origin/gh/oulgen/46/head 2025-08-14T21:22:30.3704688Z * [new branch] gh/oulgen/46/orig -> origin/gh/oulgen/46/orig 2025-08-14T21:22:30.3705965Z * [new branch] gh/oulgen/47/base -> origin/gh/oulgen/47/base 2025-08-14T21:22:30.3706886Z * [new branch] gh/oulgen/47/head -> origin/gh/oulgen/47/head 2025-08-14T21:22:30.3707790Z * [new branch] gh/oulgen/47/orig -> origin/gh/oulgen/47/orig 2025-08-14T21:22:30.3709582Z * [new branch] gh/pearu/108/base -> origin/gh/pearu/108/base 2025-08-14T21:22:30.3710596Z * [new branch] gh/pearu/108/head -> origin/gh/pearu/108/head 2025-08-14T21:22:30.3711593Z * [new branch] gh/pearu/108/orig -> origin/gh/pearu/108/orig 2025-08-14T21:22:30.3720928Z * [new branch] gh/pearu/56/base -> origin/gh/pearu/56/base 2025-08-14T21:22:30.3721183Z * [new branch] gh/pearu/56/head -> origin/gh/pearu/56/head 2025-08-14T21:22:30.3721387Z * [new branch] gh/pearu/56/orig -> origin/gh/pearu/56/orig 2025-08-14T21:22:30.3721574Z * [new branch] gh/pearu/97/base -> origin/gh/pearu/97/base 2025-08-14T21:22:30.3722199Z * [new branch] gh/pearu/97/head -> origin/gh/pearu/97/head 2025-08-14T21:22:30.3723174Z * [new branch] gh/pearu/97/orig -> origin/gh/pearu/97/orig 2025-08-14T21:22:30.3724720Z * [new branch] gh/qqaatw/29/base -> origin/gh/qqaatw/29/base 2025-08-14T21:22:30.3725578Z * [new branch] gh/qqaatw/29/head -> origin/gh/qqaatw/29/head 2025-08-14T21:22:30.3726556Z * [new branch] gh/qqaatw/29/orig -> origin/gh/qqaatw/29/orig 2025-08-14T21:22:30.3731543Z * [new branch] gh/raymo/cleanup-dynamo-logging -> origin/gh/raymo/cleanup-dynamo-logging 2025-08-14T21:22:30.3731796Z * [new branch] gh/raymo/refresh-script -> origin/gh/raymo/refresh-script 2025-08-14T21:22:30.3732011Z * [new branch] gh/rec/141/base -> origin/gh/rec/141/base 2025-08-14T21:22:30.3732184Z * [new branch] gh/rec/141/head -> origin/gh/rec/141/head 2025-08-14T21:22:30.3732694Z * [new branch] gh/rec/153/base -> origin/gh/rec/153/base 2025-08-14T21:22:30.3733655Z * [new branch] gh/rec/153/head -> origin/gh/rec/153/head 2025-08-14T21:22:30.3734580Z * [new branch] gh/rec/153/orig -> origin/gh/rec/153/orig 2025-08-14T21:22:30.3735787Z * [new branch] gh/rec/154/base -> origin/gh/rec/154/base 2025-08-14T21:22:30.3736689Z * [new branch] gh/rec/154/head -> origin/gh/rec/154/head 2025-08-14T21:22:30.3737598Z * [new branch] gh/rec/154/orig -> origin/gh/rec/154/orig 2025-08-14T21:22:30.3738829Z * [new branch] gh/rec/156/base -> origin/gh/rec/156/base 2025-08-14T21:22:30.3739791Z * [new branch] gh/rec/156/head -> origin/gh/rec/156/head 2025-08-14T21:22:30.3740700Z * [new branch] gh/rec/156/orig -> origin/gh/rec/156/orig 2025-08-14T21:22:30.3746433Z * [new branch] gh/rec/158/base -> origin/gh/rec/158/base 2025-08-14T21:22:30.3747323Z * [new branch] gh/rec/158/head -> origin/gh/rec/158/head 2025-08-14T21:22:30.3748289Z * [new branch] gh/rec/158/orig -> origin/gh/rec/158/orig 2025-08-14T21:22:30.3750087Z * [new branch] gh/rec/159/base -> origin/gh/rec/159/base 2025-08-14T21:22:30.3750987Z * [new branch] gh/rec/159/head -> origin/gh/rec/159/head 2025-08-14T21:22:30.3752208Z * [new branch] gh/rec/160/base -> origin/gh/rec/160/base 2025-08-14T21:22:30.3753136Z * [new branch] gh/rec/160/head -> origin/gh/rec/160/head 2025-08-14T21:22:30.3754074Z * [new branch] gh/rec/160/orig -> origin/gh/rec/160/orig 2025-08-14T21:22:30.3755334Z * [new branch] gh/rec/161/base -> origin/gh/rec/161/base 2025-08-14T21:22:30.3760161Z * [new branch] 
gh/rec/161/head -> origin/gh/rec/161/head 2025-08-14T21:22:30.3760349Z * [new branch] gh/rec/161/orig -> origin/gh/rec/161/orig 2025-08-14T21:22:30.3760512Z * [new branch] gh/rec/162/base -> origin/gh/rec/162/base 2025-08-14T21:22:30.3760702Z * [new branch] gh/rec/162/head -> origin/gh/rec/162/head 2025-08-14T21:22:30.3760856Z * [new branch] gh/rec/162/orig -> origin/gh/rec/162/orig 2025-08-14T21:22:30.3764467Z * [new branch] gh/rec/163/base -> origin/gh/rec/163/base 2025-08-14T21:22:30.3764663Z * [new branch] gh/rec/163/head -> origin/gh/rec/163/head 2025-08-14T21:22:30.3764845Z * [new branch] gh/rec/163/orig -> origin/gh/rec/163/orig 2025-08-14T21:22:30.3765142Z * [new branch] gh/rec/164/base -> origin/gh/rec/164/base 2025-08-14T21:22:30.3766089Z * [new branch] gh/rec/164/head -> origin/gh/rec/164/head 2025-08-14T21:22:30.3767000Z * [new branch] gh/rec/164/orig -> origin/gh/rec/164/orig 2025-08-14T21:22:30.3768696Z * [new branch] gh/robert-hardwick/1/base -> origin/gh/robert-hardwick/1/base 2025-08-14T21:22:30.3769578Z * [new branch] gh/robert-hardwick/1/head -> origin/gh/robert-hardwick/1/head 2025-08-14T21:22:30.3770648Z * [new branch] gh/robert-hardwick/1/orig -> origin/gh/robert-hardwick/1/orig 2025-08-14T21:22:30.3780339Z * [new branch] gh/robert-hardwick/2/base -> origin/gh/robert-hardwick/2/base 2025-08-14T21:22:30.3781216Z * [new branch] gh/robert-hardwick/2/head -> origin/gh/robert-hardwick/2/head 2025-08-14T21:22:30.3782166Z * [new branch] gh/robert-hardwick/2/orig -> origin/gh/robert-hardwick/2/orig 2025-08-14T21:22:30.3783381Z * [new branch] gh/robert-hardwick/3/base -> origin/gh/robert-hardwick/3/base 2025-08-14T21:22:30.3784294Z * [new branch] gh/robert-hardwick/3/head -> origin/gh/robert-hardwick/3/head 2025-08-14T21:22:30.3793327Z * [new branch] gh/robert-hardwick/3/orig -> origin/gh/robert-hardwick/3/orig 2025-08-14T21:22:30.3793576Z * [new branch] gh/robert-hardwick/4/base -> origin/gh/robert-hardwick/4/base 2025-08-14T21:22:30.3793820Z * [new branch] gh/robert-hardwick/4/head -> origin/gh/robert-hardwick/4/head 2025-08-14T21:22:30.3794046Z * [new branch] gh/robert-hardwick/4/orig -> origin/gh/robert-hardwick/4/orig 2025-08-14T21:22:30.3794243Z * [new branch] gh/rtimpe/1/base -> origin/gh/rtimpe/1/base 2025-08-14T21:22:30.3794451Z * [new branch] gh/rtimpe/1/head -> origin/gh/rtimpe/1/head 2025-08-14T21:22:30.3795149Z * [new branch] gh/rtimpe/10/base -> origin/gh/rtimpe/10/base 2025-08-14T21:22:30.3796073Z * [new branch] gh/rtimpe/10/head -> origin/gh/rtimpe/10/head 2025-08-14T21:22:30.3796973Z * [new branch] gh/rtimpe/10/orig -> origin/gh/rtimpe/10/orig 2025-08-14T21:22:30.3798235Z * [new branch] gh/rtimpe/11/base -> origin/gh/rtimpe/11/base 2025-08-14T21:22:30.3807552Z * [new branch] gh/rtimpe/11/head -> origin/gh/rtimpe/11/head 2025-08-14T21:22:30.3807851Z * [new branch] gh/rtimpe/11/orig -> origin/gh/rtimpe/11/orig 2025-08-14T21:22:30.3808051Z * [new branch] gh/rtimpe/12/base -> origin/gh/rtimpe/12/base 2025-08-14T21:22:30.3808242Z * [new branch] gh/rtimpe/12/head -> origin/gh/rtimpe/12/head 2025-08-14T21:22:30.3808438Z * [new branch] gh/rtimpe/12/orig -> origin/gh/rtimpe/12/orig 2025-08-14T21:22:30.3808629Z * [new branch] gh/rtimpe/2/base -> origin/gh/rtimpe/2/base 2025-08-14T21:22:30.3808818Z * [new branch] gh/rtimpe/2/head -> origin/gh/rtimpe/2/head 2025-08-14T21:22:30.3809018Z * [new branch] gh/rtimpe/3/base -> origin/gh/rtimpe/3/base 2025-08-14T21:22:30.3809209Z * [new branch] gh/rtimpe/3/head -> origin/gh/rtimpe/3/head 2025-08-14T21:22:30.3809414Z * [new 
branch] gh/rtimpe/4/base -> origin/gh/rtimpe/4/base 2025-08-14T21:22:30.3809912Z * [new branch] gh/rtimpe/4/head -> origin/gh/rtimpe/4/head 2025-08-14T21:22:30.3811132Z * [new branch] gh/rtimpe/5/base -> origin/gh/rtimpe/5/base 2025-08-14T21:22:30.3812027Z * [new branch] gh/rtimpe/5/head -> origin/gh/rtimpe/5/head 2025-08-14T21:22:30.3812947Z * [new branch] gh/rtimpe/5/orig -> origin/gh/rtimpe/5/orig 2025-08-14T21:22:30.3818629Z * [new branch] gh/rtimpe/6/base -> origin/gh/rtimpe/6/base 2025-08-14T21:22:30.3819572Z * [new branch] gh/rtimpe/6/head -> origin/gh/rtimpe/6/head 2025-08-14T21:22:30.3820844Z * [new branch] gh/rtimpe/6/orig -> origin/gh/rtimpe/6/orig 2025-08-14T21:22:30.3822093Z * [new branch] gh/rtimpe/7/base -> origin/gh/rtimpe/7/base 2025-08-14T21:22:30.3823064Z * [new branch] gh/rtimpe/7/head -> origin/gh/rtimpe/7/head 2025-08-14T21:22:30.3823937Z * [new branch] gh/rtimpe/7/orig -> origin/gh/rtimpe/7/orig 2025-08-14T21:22:30.3825178Z * [new branch] gh/rtimpe/8/base -> origin/gh/rtimpe/8/base 2025-08-14T21:22:30.3826109Z * [new branch] gh/rtimpe/8/head -> origin/gh/rtimpe/8/head 2025-08-14T21:22:30.3826986Z * [new branch] gh/rtimpe/8/orig -> origin/gh/rtimpe/8/orig 2025-08-14T21:22:30.3832617Z * [new branch] gh/rtimpe/9/base -> origin/gh/rtimpe/9/base 2025-08-14T21:22:30.3832920Z * [new branch] gh/rtimpe/9/head -> origin/gh/rtimpe/9/head 2025-08-14T21:22:30.3833121Z * [new branch] gh/rtimpe/9/orig -> origin/gh/rtimpe/9/orig 2025-08-14T21:22:30.3833378Z * [new branch] gh/ruisizhang123/1/base -> origin/gh/ruisizhang123/1/base 2025-08-14T21:22:30.3833574Z * [new branch] gh/ruisizhang123/1/head -> origin/gh/ruisizhang123/1/head 2025-08-14T21:22:30.3833938Z * [new branch] gh/ruisizhang123/1/orig -> origin/gh/ruisizhang123/1/orig 2025-08-14T21:22:30.3835463Z * [new branch] gh/ruisizhang123/4/base -> origin/gh/ruisizhang123/4/base 2025-08-14T21:22:30.3836529Z * [new branch] gh/ruisizhang123/4/head -> origin/gh/ruisizhang123/4/head 2025-08-14T21:22:30.3837452Z * [new branch] gh/ruisizhang123/4/orig -> origin/gh/ruisizhang123/4/orig 2025-08-14T21:22:30.3838712Z * [new branch] gh/ruisizhang123/5/base -> origin/gh/ruisizhang123/5/base 2025-08-14T21:22:30.3839703Z * [new branch] gh/ruisizhang123/5/head -> origin/gh/ruisizhang123/5/head 2025-08-14T21:22:30.3840609Z * [new branch] gh/ruisizhang123/5/orig -> origin/gh/ruisizhang123/5/orig 2025-08-14T21:22:30.3842021Z * [new branch] gh/ruisizhang123/6/base -> origin/gh/ruisizhang123/6/base 2025-08-14T21:22:30.3851611Z * [new branch] gh/ruisizhang123/6/head -> origin/gh/ruisizhang123/6/head 2025-08-14T21:22:30.3851842Z * [new branch] gh/ruisizhang123/6/orig -> origin/gh/ruisizhang123/6/orig 2025-08-14T21:22:30.3861888Z * [new branch] gh/ruisizhang123/7/base -> origin/gh/ruisizhang123/7/base 2025-08-14T21:22:30.3862124Z * [new branch] gh/ruisizhang123/7/head -> origin/gh/ruisizhang123/7/head 2025-08-14T21:22:30.3862360Z * [new branch] gh/ruisizhang123/7/orig -> origin/gh/ruisizhang123/7/orig 2025-08-14T21:22:30.3862592Z * [new branch] gh/ruisizhang123/8/base -> origin/gh/ruisizhang123/8/base 2025-08-14T21:22:30.3862970Z * [new branch] gh/ruisizhang123/8/head -> origin/gh/ruisizhang123/8/head 2025-08-14T21:22:30.3863952Z * [new branch] gh/ruisizhang123/8/orig -> origin/gh/ruisizhang123/8/orig 2025-08-14T21:22:30.3865450Z * [new branch] gh/sarckk/2/base -> origin/gh/sarckk/2/base 2025-08-14T21:22:30.3866375Z * [new branch] gh/sarckk/2/head -> origin/gh/sarckk/2/head 2025-08-14T21:22:30.3867301Z * [new branch] gh/sarckk/2/orig -> 
origin/gh/sarckk/2/orig 2025-08-14T21:22:30.3869240Z * [new branch] gh/seemethere/23/head -> origin/gh/seemethere/23/head 2025-08-14T21:22:30.3870482Z * [new branch] gh/seemethere/24/base -> origin/gh/seemethere/24/base 2025-08-14T21:22:30.3871423Z * [new branch] gh/seemethere/24/head -> origin/gh/seemethere/24/head 2025-08-14T21:22:30.3877194Z * [new branch] gh/seemethere/24/orig -> origin/gh/seemethere/24/orig 2025-08-14T21:22:30.3878570Z * [new branch] gh/seemethere/30/base -> origin/gh/seemethere/30/base 2025-08-14T21:22:30.3879487Z * [new branch] gh/seemethere/30/head -> origin/gh/seemethere/30/head 2025-08-14T21:22:30.3880479Z * [new branch] gh/seemethere/30/orig -> origin/gh/seemethere/30/orig 2025-08-14T21:22:30.3881751Z * [new branch] gh/seemethere/32/base -> origin/gh/seemethere/32/base 2025-08-14T21:22:30.3882634Z * [new branch] gh/seemethere/32/head -> origin/gh/seemethere/32/head 2025-08-14T21:22:30.3883589Z * [new branch] gh/seemethere/32/orig -> origin/gh/seemethere/32/orig 2025-08-14T21:22:30.3884767Z * [new branch] gh/seemethere/33/base -> origin/gh/seemethere/33/base 2025-08-14T21:22:30.3885669Z * [new branch] gh/seemethere/33/head -> origin/gh/seemethere/33/head 2025-08-14T21:22:30.3892822Z * [new branch] gh/seemethere/33/orig -> origin/gh/seemethere/33/orig 2025-08-14T21:22:30.3893036Z * [new branch] gh/seemethere/34/base -> origin/gh/seemethere/34/base 2025-08-14T21:22:30.3893259Z * [new branch] gh/seemethere/34/head -> origin/gh/seemethere/34/head 2025-08-14T21:22:30.3893492Z * [new branch] gh/seemethere/34/orig -> origin/gh/seemethere/34/orig 2025-08-14T21:22:30.3893707Z * [new branch] gh/seemethere/35/base -> origin/gh/seemethere/35/base 2025-08-14T21:22:30.3893927Z * [new branch] gh/seemethere/35/head -> origin/gh/seemethere/35/head 2025-08-14T21:22:30.3894141Z * [new branch] gh/seemethere/35/orig -> origin/gh/seemethere/35/orig 2025-08-14T21:22:30.3894468Z * [new branch] gh/seemethere/37/base -> origin/gh/seemethere/37/base 2025-08-14T21:22:30.3895386Z * [new branch] gh/seemethere/37/head -> origin/gh/seemethere/37/head 2025-08-14T21:22:30.3896315Z * [new branch] gh/seemethere/37/orig -> origin/gh/seemethere/37/orig 2025-08-14T21:22:30.3897551Z * [new branch] gh/seemethere/39/base -> origin/gh/seemethere/39/base 2025-08-14T21:22:30.3898455Z * [new branch] gh/seemethere/39/head -> origin/gh/seemethere/39/head 2025-08-14T21:22:30.3899283Z * [new branch] gh/seemethere/39/orig -> origin/gh/seemethere/39/orig 2025-08-14T21:22:30.3900594Z * [new branch] gh/seemethere/40/base -> origin/gh/seemethere/40/base 2025-08-14T21:22:30.3905849Z * [new branch] gh/seemethere/40/head -> origin/gh/seemethere/40/head 2025-08-14T21:22:30.3906774Z * [new branch] gh/seemethere/40/orig -> origin/gh/seemethere/40/orig 2025-08-14T21:22:30.3908021Z * [new branch] gh/seemethere/41/base -> origin/gh/seemethere/41/base 2025-08-14T21:22:30.3908911Z * [new branch] gh/seemethere/41/head -> origin/gh/seemethere/41/head 2025-08-14T21:22:30.3909923Z * [new branch] gh/seemethere/41/orig -> origin/gh/seemethere/41/orig 2025-08-14T21:22:30.3911121Z * [new branch] gh/seemethere/42/base -> origin/gh/seemethere/42/base 2025-08-14T21:22:30.3912068Z * [new branch] gh/seemethere/42/head -> origin/gh/seemethere/42/head 2025-08-14T21:22:30.3912981Z * [new branch] gh/seemethere/42/orig -> origin/gh/seemethere/42/orig 2025-08-14T21:22:30.3914245Z * [new branch] gh/seemethere/43/base -> origin/gh/seemethere/43/base 2025-08-14T21:22:30.3915179Z * [new branch] gh/seemethere/43/head -> origin/gh/seemethere/43/head 
2025-08-14T21:22:30.3919960Z * [new branch] gh/seemethere/43/orig -> origin/gh/seemethere/43/orig 2025-08-14T21:22:30.3920228Z * [new branch] gh/seemethere/44/base -> origin/gh/seemethere/44/base 2025-08-14T21:22:30.3920458Z * [new branch] gh/seemethere/44/head -> origin/gh/seemethere/44/head 2025-08-14T21:22:30.3920658Z * [new branch] gh/seemethere/44/orig -> origin/gh/seemethere/44/orig 2025-08-14T21:22:30.3920919Z * [new branch] gh/seemethere/45/base -> origin/gh/seemethere/45/base 2025-08-14T21:22:30.3921548Z * [new branch] gh/seemethere/45/head -> origin/gh/seemethere/45/head 2025-08-14T21:22:30.3924041Z * [new branch] gh/seemethere/45/orig -> origin/gh/seemethere/45/orig 2025-08-14T21:22:30.3924266Z * [new branch] gh/seemethere/46/base -> origin/gh/seemethere/46/base 2025-08-14T21:22:30.3924735Z * [new branch] gh/seemethere/46/head -> origin/gh/seemethere/46/head 2025-08-14T21:22:30.3925734Z * [new branch] gh/seemethere/46/orig -> origin/gh/seemethere/46/orig 2025-08-14T21:22:30.3927270Z * [new branch] gh/seemethere/47/base -> origin/gh/seemethere/47/base 2025-08-14T21:22:30.3928169Z * [new branch] gh/seemethere/47/head -> origin/gh/seemethere/47/head 2025-08-14T21:22:30.3929147Z * [new branch] gh/seemethere/47/orig -> origin/gh/seemethere/47/orig 2025-08-14T21:22:30.3938939Z * [new branch] gh/seemethere/48/base -> origin/gh/seemethere/48/base 2025-08-14T21:22:30.3939919Z * [new branch] gh/seemethere/48/head -> origin/gh/seemethere/48/head 2025-08-14T21:22:30.3940881Z * [new branch] gh/seemethere/48/orig -> origin/gh/seemethere/48/orig 2025-08-14T21:22:30.3942540Z * [new branch] gh/seemethere/49/base -> origin/gh/seemethere/49/base 2025-08-14T21:22:30.3943448Z * [new branch] gh/seemethere/49/head -> origin/gh/seemethere/49/head 2025-08-14T21:22:30.3952993Z * [new branch] gh/seemethere/49/orig -> origin/gh/seemethere/49/orig 2025-08-14T21:22:30.3953212Z * [new branch] gh/seemethere/50/base -> origin/gh/seemethere/50/base 2025-08-14T21:22:30.3953434Z * [new branch] gh/seemethere/50/head -> origin/gh/seemethere/50/head 2025-08-14T21:22:30.3953644Z * [new branch] gh/seemethere/50/orig -> origin/gh/seemethere/50/orig 2025-08-14T21:22:30.3953874Z * [new branch] gh/seemethere/51/base -> origin/gh/seemethere/51/base 2025-08-14T21:22:30.3954217Z * [new branch] gh/seemethere/51/head -> origin/gh/seemethere/51/head 2025-08-14T21:22:30.3954404Z * [new branch] gh/seemethere/51/orig -> origin/gh/seemethere/51/orig 2025-08-14T21:22:30.3954935Z * [new branch] gh/seemethere/52/base -> origin/gh/seemethere/52/base 2025-08-14T21:22:30.3955891Z * [new branch] gh/seemethere/52/head -> origin/gh/seemethere/52/head 2025-08-14T21:22:30.3956792Z * [new branch] gh/seemethere/52/orig -> origin/gh/seemethere/52/orig 2025-08-14T21:22:30.3958117Z * [new branch] gh/seemethere/53/base -> origin/gh/seemethere/53/base 2025-08-14T21:22:30.3967115Z * [new branch] gh/seemethere/53/head -> origin/gh/seemethere/53/head 2025-08-14T21:22:30.3967356Z * [new branch] gh/seemethere/53/orig -> origin/gh/seemethere/53/orig 2025-08-14T21:22:30.3967583Z * [new branch] gh/seemethere/54/base -> origin/gh/seemethere/54/base 2025-08-14T21:22:30.3967750Z * [new branch] gh/seemethere/54/head -> origin/gh/seemethere/54/head 2025-08-14T21:22:30.3967929Z * [new branch] gh/seemethere/54/orig -> origin/gh/seemethere/54/orig 2025-08-14T21:22:30.3968096Z * [new branch] gh/seemethere/55/base -> origin/gh/seemethere/55/base 2025-08-14T21:22:30.3968263Z * [new branch] gh/seemethere/55/head -> origin/gh/seemethere/55/head 
2025-08-14T21:22:30.3968436Z * [new branch] gh/seemethere/55/orig -> origin/gh/seemethere/55/orig 2025-08-14T21:22:30.3968605Z * [new branch] gh/seemethere/56/base -> origin/gh/seemethere/56/base 2025-08-14T21:22:30.3968781Z * [new branch] gh/seemethere/56/head -> origin/gh/seemethere/56/head 2025-08-14T21:22:30.3969748Z * [new branch] gh/seemethere/56/orig -> origin/gh/seemethere/56/orig 2025-08-14T21:22:30.3971226Z * [new branch] gh/seemethere/57/base -> origin/gh/seemethere/57/base 2025-08-14T21:22:30.3972125Z * [new branch] gh/seemethere/57/head -> origin/gh/seemethere/57/head 2025-08-14T21:22:30.3975236Z * [new branch] gh/seemethere/57/orig -> origin/gh/seemethere/57/orig 2025-08-14T21:22:30.3978689Z * [new branch] gh/seemethere/58/base -> origin/gh/seemethere/58/base 2025-08-14T21:22:30.3979572Z * [new branch] gh/seemethere/58/head -> origin/gh/seemethere/58/head 2025-08-14T21:22:30.3980522Z * [new branch] gh/seemethere/58/orig -> origin/gh/seemethere/58/orig 2025-08-14T21:22:30.3981782Z * [new branch] gh/seemethere/59/base -> origin/gh/seemethere/59/base 2025-08-14T21:22:30.3982717Z * [new branch] gh/seemethere/59/head -> origin/gh/seemethere/59/head 2025-08-14T21:22:30.3983655Z * [new branch] gh/seemethere/59/orig -> origin/gh/seemethere/59/orig 2025-08-14T21:22:30.3984900Z * [new branch] gh/seemethere/7/head -> origin/gh/seemethere/7/head 2025-08-14T21:22:30.3986681Z * [new branch] gh/shunting314/145/base -> origin/gh/shunting314/145/base 2025-08-14T21:22:30.3987770Z * [new branch] gh/shunting314/145/head -> origin/gh/shunting314/145/head 2025-08-14T21:22:30.3994459Z * [new branch] gh/shunting314/145/orig -> origin/gh/shunting314/145/orig 2025-08-14T21:22:30.3994702Z * [new branch] gh/shunting314/176/base -> origin/gh/shunting314/176/base 2025-08-14T21:22:30.3994927Z * [new branch] gh/shunting314/176/head -> origin/gh/shunting314/176/head 2025-08-14T21:22:30.3995110Z * [new branch] gh/shunting314/176/orig -> origin/gh/shunting314/176/orig 2025-08-14T21:22:30.3995289Z * [new branch] gh/shunting314/211/base -> origin/gh/shunting314/211/base 2025-08-14T21:22:30.3995543Z * [new branch] gh/shunting314/211/head -> origin/gh/shunting314/211/head 2025-08-14T21:22:30.3995786Z * [new branch] gh/shunting314/211/orig -> origin/gh/shunting314/211/orig 2025-08-14T21:22:30.3996973Z * [new branch] gh/shunting314/212/base -> origin/gh/shunting314/212/base 2025-08-14T21:22:30.3997869Z * [new branch] gh/shunting314/212/head -> origin/gh/shunting314/212/head 2025-08-14T21:22:30.3998775Z * [new branch] gh/shunting314/212/orig -> origin/gh/shunting314/212/orig 2025-08-14T21:22:30.4000413Z * [new branch] gh/shunting314/213/base -> origin/gh/shunting314/213/base 2025-08-14T21:22:30.4001446Z * [new branch] gh/shunting314/213/head -> origin/gh/shunting314/213/head 2025-08-14T21:22:30.4010903Z * [new branch] gh/shunting314/213/orig -> origin/gh/shunting314/213/orig 2025-08-14T21:22:30.4011131Z * [new branch] gh/silverguo/1/base -> origin/gh/silverguo/1/base 2025-08-14T21:22:30.4011353Z * [new branch] gh/silverguo/1/head -> origin/gh/silverguo/1/head 2025-08-14T21:22:30.4011573Z * [new branch] gh/silverguo/2/base -> origin/gh/silverguo/2/base 2025-08-14T21:22:30.4011789Z * [new branch] gh/silverguo/2/head -> origin/gh/silverguo/2/head 2025-08-14T21:22:30.4012408Z * [new branch] gh/silverguo/3/base -> origin/gh/silverguo/3/base 2025-08-14T21:22:30.4013378Z * [new branch] gh/silverguo/3/head -> origin/gh/silverguo/3/head 2025-08-14T21:22:30.4014455Z * [new branch] gh/silverguo/4/base -> 
origin/gh/silverguo/4/base 2025-08-14T21:22:30.4015380Z * [new branch] gh/silverguo/4/head -> origin/gh/silverguo/4/head 2025-08-14T21:22:30.4021361Z * [new branch] gh/sinhaanhsul/1/base -> origin/gh/sinhaanhsul/1/base 2025-08-14T21:22:30.4021627Z * [new branch] gh/sinhaanhsul/1/head -> origin/gh/sinhaanhsul/1/head 2025-08-14T21:22:30.4021790Z * [new branch] gh/skarjala/11/base -> origin/gh/skarjala/11/base 2025-08-14T21:22:30.4021951Z * [new branch] gh/skarjala/11/head -> origin/gh/skarjala/11/head 2025-08-14T21:22:30.4022120Z * [new branch] gh/skarjala/11/orig -> origin/gh/skarjala/11/orig 2025-08-14T21:22:30.4023726Z * [new branch] gh/skarjala/13/base -> origin/gh/skarjala/13/base 2025-08-14T21:22:30.4024673Z * [new branch] gh/skarjala/13/head -> origin/gh/skarjala/13/head 2025-08-14T21:22:30.4025604Z * [new branch] gh/skarjala/13/orig -> origin/gh/skarjala/13/orig 2025-08-14T21:22:30.4026908Z * [new branch] gh/skarjala/14/base -> origin/gh/skarjala/14/base 2025-08-14T21:22:30.4027825Z * [new branch] gh/skarjala/14/head -> origin/gh/skarjala/14/head 2025-08-14T21:22:30.4028792Z * [new branch] gh/skarjala/14/orig -> origin/gh/skarjala/14/orig 2025-08-14T21:22:30.4030027Z * [new branch] gh/skarjala/15/base -> origin/gh/skarjala/15/base 2025-08-14T21:22:30.4030933Z * [new branch] gh/skarjala/15/head -> origin/gh/skarjala/15/head 2025-08-14T21:22:30.4036298Z * [new branch] gh/skarjala/15/orig -> origin/gh/skarjala/15/orig 2025-08-14T21:22:30.4037538Z * [new branch] gh/skarjala/16/base -> origin/gh/skarjala/16/base 2025-08-14T21:22:30.4038479Z * [new branch] gh/skarjala/16/head -> origin/gh/skarjala/16/head 2025-08-14T21:22:30.4039423Z * [new branch] gh/skarjala/16/orig -> origin/gh/skarjala/16/orig 2025-08-14T21:22:30.4040647Z * [new branch] gh/skarjala/17/base -> origin/gh/skarjala/17/base 2025-08-14T21:22:30.4041714Z * [new branch] gh/skarjala/17/head -> origin/gh/skarjala/17/head 2025-08-14T21:22:30.4042608Z * [new branch] gh/skarjala/17/orig -> origin/gh/skarjala/17/orig 2025-08-14T21:22:30.4043874Z * [new branch] gh/skarjala/18/base -> origin/gh/skarjala/18/base 2025-08-14T21:22:30.4044793Z * [new branch] gh/skarjala/18/head -> origin/gh/skarjala/18/head 2025-08-14T21:22:30.4045809Z * [new branch] gh/skarjala/18/orig -> origin/gh/skarjala/18/orig 2025-08-14T21:22:30.4050590Z * [new branch] gh/skarjala/19/base -> origin/gh/skarjala/19/base 2025-08-14T21:22:30.4050818Z * [new branch] gh/skarjala/19/head -> origin/gh/skarjala/19/head 2025-08-14T21:22:30.4051025Z * [new branch] gh/skarjala/19/orig -> origin/gh/skarjala/19/orig 2025-08-14T21:22:30.4051222Z * [new branch] gh/soulitzer/269/base -> origin/gh/soulitzer/269/base 2025-08-14T21:22:30.4052149Z * [new branch] gh/soulitzer/269/head -> origin/gh/soulitzer/269/head 2025-08-14T21:22:30.4053090Z * [new branch] gh/soulitzer/269/orig -> origin/gh/soulitzer/269/orig 2025-08-14T21:22:30.4054477Z * [new branch] gh/soulitzer/276/base -> origin/gh/soulitzer/276/base 2025-08-14T21:22:30.4055395Z * [new branch] gh/soulitzer/276/head -> origin/gh/soulitzer/276/head 2025-08-14T21:22:30.4056323Z * [new branch] gh/soulitzer/276/orig -> origin/gh/soulitzer/276/orig 2025-08-14T21:22:30.4057813Z * [new branch] gh/soulitzer/287/base -> origin/gh/soulitzer/287/base 2025-08-14T21:22:30.4058769Z * [new branch] gh/soulitzer/287/head -> origin/gh/soulitzer/287/head 2025-08-14T21:22:30.4059694Z * [new branch] gh/soulitzer/287/orig -> origin/gh/soulitzer/287/orig 2025-08-14T21:22:30.4069578Z * [new branch] gh/soulitzer/296/base -> origin/gh/soulitzer/296/base 
2025-08-14T21:22:30.4070800Z * [new branch] gh/soulitzer/296/head -> origin/gh/soulitzer/296/head 2025-08-14T21:22:30.4071689Z * [new branch] gh/soulitzer/296/orig -> origin/gh/soulitzer/296/orig 2025-08-14T21:22:30.4072989Z * [new branch] gh/soulitzer/299/base -> origin/gh/soulitzer/299/base 2025-08-14T21:22:30.4073965Z * [new branch] gh/soulitzer/299/head -> origin/gh/soulitzer/299/head 2025-08-14T21:22:30.4083231Z * [new branch] gh/soulitzer/299/orig -> origin/gh/soulitzer/299/orig 2025-08-14T21:22:30.4083462Z * [new branch] gh/soulitzer/300/base -> origin/gh/soulitzer/300/base 2025-08-14T21:22:30.4083688Z * [new branch] gh/soulitzer/300/head -> origin/gh/soulitzer/300/head 2025-08-14T21:22:30.4083898Z * [new branch] gh/soulitzer/300/orig -> origin/gh/soulitzer/300/orig 2025-08-14T21:22:30.4084125Z * [new branch] gh/soulitzer/301/base -> origin/gh/soulitzer/301/base 2025-08-14T21:22:30.4084348Z * [new branch] gh/soulitzer/301/head -> origin/gh/soulitzer/301/head 2025-08-14T21:22:30.4084561Z * [new branch] gh/soulitzer/301/orig -> origin/gh/soulitzer/301/orig 2025-08-14T21:22:30.4084787Z * [new branch] gh/soulitzer/313/base -> origin/gh/soulitzer/313/base 2025-08-14T21:22:30.4084957Z * [new branch] gh/soulitzer/313/head -> origin/gh/soulitzer/313/head 2025-08-14T21:22:30.4085148Z * [new branch] gh/soulitzer/313/orig -> origin/gh/soulitzer/313/orig 2025-08-14T21:22:30.4086485Z * [new branch] gh/soulitzer/319/base -> origin/gh/soulitzer/319/base 2025-08-14T21:22:30.4087435Z * [new branch] gh/soulitzer/319/head -> origin/gh/soulitzer/319/head 2025-08-14T21:22:30.4088293Z * [new branch] gh/soulitzer/319/orig -> origin/gh/soulitzer/319/orig 2025-08-14T21:22:30.4089719Z * [new branch] gh/soulitzer/320/base -> origin/gh/soulitzer/320/base 2025-08-14T21:22:30.4091861Z * [new branch] gh/soulitzer/320/head -> origin/gh/soulitzer/320/head 2025-08-14T21:22:30.4092093Z * [new branch] gh/soulitzer/320/orig -> origin/gh/soulitzer/320/orig 2025-08-14T21:22:30.4097755Z * [new branch] gh/soulitzer/336/base -> origin/gh/soulitzer/336/base 2025-08-14T21:22:30.4097998Z * [new branch] gh/soulitzer/336/head -> origin/gh/soulitzer/336/head 2025-08-14T21:22:30.4098195Z * [new branch] gh/soulitzer/336/orig -> origin/gh/soulitzer/336/orig 2025-08-14T21:22:30.4098369Z * [new branch] gh/soulitzer/347/base -> origin/gh/soulitzer/347/base 2025-08-14T21:22:30.4098535Z * [new branch] gh/soulitzer/347/head -> origin/gh/soulitzer/347/head 2025-08-14T21:22:30.4098701Z * [new branch] gh/soulitzer/347/orig -> origin/gh/soulitzer/347/orig 2025-08-14T21:22:30.4099207Z * [new branch] gh/soulitzer/349/base -> origin/gh/soulitzer/349/base 2025-08-14T21:22:30.4100248Z * [new branch] gh/soulitzer/349/head -> origin/gh/soulitzer/349/head 2025-08-14T21:22:30.4101201Z * [new branch] gh/soulitzer/349/orig -> origin/gh/soulitzer/349/orig 2025-08-14T21:22:30.4102380Z * [new branch] gh/soulitzer/350/base -> origin/gh/soulitzer/350/base 2025-08-14T21:22:30.4103251Z * [new branch] gh/soulitzer/350/head -> origin/gh/soulitzer/350/head 2025-08-14T21:22:30.4108246Z * [new branch] gh/soulitzer/350/orig -> origin/gh/soulitzer/350/orig 2025-08-14T21:22:30.4112599Z * [new branch] gh/soulitzer/351/base -> origin/gh/soulitzer/351/base 2025-08-14T21:22:30.4113029Z * [new branch] gh/soulitzer/351/head -> origin/gh/soulitzer/351/head 2025-08-14T21:22:30.4114067Z * [new branch] gh/soulitzer/351/orig -> origin/gh/soulitzer/351/orig 2025-08-14T21:22:30.4115244Z * [new branch] gh/soulitzer/353/base -> origin/gh/soulitzer/353/base 
2025-08-14T21:22:30.4116268Z * [new branch] gh/soulitzer/353/head -> origin/gh/soulitzer/353/head 2025-08-14T21:22:30.4117618Z * [new branch] gh/soulitzer/353/orig -> origin/gh/soulitzer/353/orig 2025-08-14T21:22:30.4124765Z * [new branch] gh/soulitzer/358/base -> origin/gh/soulitzer/358/base 2025-08-14T21:22:30.4124991Z * [new branch] gh/soulitzer/358/head -> origin/gh/soulitzer/358/head 2025-08-14T21:22:30.4125205Z * [new branch] gh/soulitzer/358/orig -> origin/gh/soulitzer/358/orig 2025-08-14T21:22:30.4125428Z * [new branch] gh/soulitzer/359/base -> origin/gh/soulitzer/359/base 2025-08-14T21:22:30.4125640Z * [new branch] gh/soulitzer/359/head -> origin/gh/soulitzer/359/head 2025-08-14T21:22:30.4125861Z * [new branch] gh/soulitzer/359/orig -> origin/gh/soulitzer/359/orig 2025-08-14T21:22:30.4126797Z * [new branch] gh/soulitzer/362/base -> origin/gh/soulitzer/362/base 2025-08-14T21:22:30.4127769Z * [new branch] gh/soulitzer/362/head -> origin/gh/soulitzer/362/head 2025-08-14T21:22:30.4128692Z * [new branch] gh/soulitzer/362/orig -> origin/gh/soulitzer/362/orig 2025-08-14T21:22:30.4129919Z * [new branch] gh/soulitzer/372/base -> origin/gh/soulitzer/372/base 2025-08-14T21:22:30.4130844Z * [new branch] gh/soulitzer/372/head -> origin/gh/soulitzer/372/head 2025-08-14T21:22:30.4131733Z * [new branch] gh/soulitzer/372/orig -> origin/gh/soulitzer/372/orig 2025-08-14T21:22:30.4137628Z * [new branch] gh/swolchok/728/next -> origin/gh/swolchok/728/next 2025-08-14T21:22:30.4138901Z * [new branch] gh/swolchok/758/base -> origin/gh/swolchok/758/base 2025-08-14T21:22:30.4139822Z * [new branch] gh/swolchok/758/head -> origin/gh/swolchok/758/head 2025-08-14T21:22:30.4140821Z * [new branch] gh/swolchok/758/orig -> origin/gh/swolchok/758/orig 2025-08-14T21:22:30.4142402Z * [new branch] gh/swolchok/767/base -> origin/gh/swolchok/767/base 2025-08-14T21:22:30.4143492Z * [new branch] gh/swolchok/767/head -> origin/gh/swolchok/767/head 2025-08-14T21:22:30.4144655Z * [new branch] gh/swolchok/767/orig -> origin/gh/swolchok/767/orig 2025-08-14T21:22:30.4146014Z * [new branch] gh/swolchok/768/base -> origin/gh/swolchok/768/base 2025-08-14T21:22:30.4147013Z * [new branch] gh/swolchok/768/head -> origin/gh/swolchok/768/head 2025-08-14T21:22:30.4152066Z * [new branch] gh/swolchok/768/orig -> origin/gh/swolchok/768/orig 2025-08-14T21:22:30.4152294Z * [new branch] gh/swolchok/769/base -> origin/gh/swolchok/769/base 2025-08-14T21:22:30.4152502Z * [new branch] gh/swolchok/769/head -> origin/gh/swolchok/769/head 2025-08-14T21:22:30.4152671Z * [new branch] gh/swolchok/769/orig -> origin/gh/swolchok/769/orig 2025-08-14T21:22:30.4154025Z * [new branch] gh/swolchok/771/base -> origin/gh/swolchok/771/base 2025-08-14T21:22:30.4155015Z * [new branch] gh/swolchok/771/head -> origin/gh/swolchok/771/head 2025-08-14T21:22:30.4156037Z * [new branch] gh/swolchok/771/orig -> origin/gh/swolchok/771/orig 2025-08-14T21:22:30.4157257Z * [new branch] gh/swolchok/772/base -> origin/gh/swolchok/772/base 2025-08-14T21:22:30.4158233Z * [new branch] gh/swolchok/772/head -> origin/gh/swolchok/772/head 2025-08-14T21:22:30.4159262Z * [new branch] gh/swolchok/772/orig -> origin/gh/swolchok/772/orig 2025-08-14T21:22:30.4160719Z * [new branch] gh/swolchok/773/base -> origin/gh/swolchok/773/base 2025-08-14T21:22:30.4161726Z * [new branch] gh/swolchok/773/head -> origin/gh/swolchok/773/head 2025-08-14T21:22:30.4170298Z * [new branch] gh/swolchok/773/orig -> origin/gh/swolchok/773/orig 2025-08-14T21:22:30.4170518Z * [new branch] gh/swolchok/786/base -> 
origin/gh/swolchok/786/base 2025-08-14T21:22:30.4170721Z * [new branch] gh/swolchok/786/head -> origin/gh/swolchok/786/head 2025-08-14T21:22:30.4170927Z * [new branch] gh/swolchok/786/orig -> origin/gh/swolchok/786/orig 2025-08-14T21:22:30.4171430Z * [new branch] gh/swolchok/787/base -> origin/gh/swolchok/787/base 2025-08-14T21:22:30.4172406Z * [new branch] gh/swolchok/787/head -> origin/gh/swolchok/787/head 2025-08-14T21:22:30.4173363Z * [new branch] gh/swolchok/787/orig -> origin/gh/swolchok/787/orig 2025-08-14T21:22:30.4175124Z * [new branch] gh/syed-ahmed/2/base -> origin/gh/syed-ahmed/2/base 2025-08-14T21:22:30.4180839Z * [new branch] gh/syed-ahmed/2/head -> origin/gh/syed-ahmed/2/head 2025-08-14T21:22:30.4181012Z * [new branch] gh/syed-ahmed/2/orig -> origin/gh/syed-ahmed/2/orig 2025-08-14T21:22:30.4181176Z * [new branch] gh/syed-ahmed/3/base -> origin/gh/syed-ahmed/3/base 2025-08-14T21:22:30.4181344Z * [new branch] gh/syed-ahmed/3/head -> origin/gh/syed-ahmed/3/head 2025-08-14T21:22:30.4181502Z * [new branch] gh/syed-ahmed/3/orig -> origin/gh/syed-ahmed/3/orig 2025-08-14T21:22:30.4182079Z * [new branch] gh/syed-ahmed/4/base -> origin/gh/syed-ahmed/4/base 2025-08-14T21:22:30.4183384Z * [new branch] gh/syed-ahmed/4/head -> origin/gh/syed-ahmed/4/head 2025-08-14T21:22:30.4184374Z * [new branch] gh/syed-ahmed/4/orig -> origin/gh/syed-ahmed/4/orig 2025-08-14T21:22:30.4185893Z * [new branch] gh/teja-rao/3/base -> origin/gh/teja-rao/3/base 2025-08-14T21:22:30.4186851Z * [new branch] gh/teja-rao/3/head -> origin/gh/teja-rao/3/head 2025-08-14T21:22:30.4187770Z * [new branch] gh/teja-rao/3/orig -> origin/gh/teja-rao/3/orig 2025-08-14T21:22:30.4189336Z * [new branch] gh/tianyu-l/2/base -> origin/gh/tianyu-l/2/base 2025-08-14T21:22:30.4190296Z * [new branch] gh/tianyu-l/2/head -> origin/gh/tianyu-l/2/head 2025-08-14T21:22:30.4195611Z * [new branch] gh/tianyu-l/2/orig -> origin/gh/tianyu-l/2/orig 2025-08-14T21:22:30.4197209Z * [new branch] gh/titaiwangms/1/base -> origin/gh/titaiwangms/1/base 2025-08-14T21:22:30.4198150Z * [new branch] gh/titaiwangms/1/head -> origin/gh/titaiwangms/1/head 2025-08-14T21:22:30.4199068Z * [new branch] gh/titaiwangms/1/orig -> origin/gh/titaiwangms/1/orig 2025-08-14T21:22:30.4200301Z * [new branch] gh/titaiwangms/2/base -> origin/gh/titaiwangms/2/base 2025-08-14T21:22:30.4201279Z * [new branch] gh/titaiwangms/2/head -> origin/gh/titaiwangms/2/head 2025-08-14T21:22:30.4202254Z * [new branch] gh/titaiwangms/2/orig -> origin/gh/titaiwangms/2/orig 2025-08-14T21:22:30.4203514Z * [new branch] gh/titaiwangms/3/base -> origin/gh/titaiwangms/3/base 2025-08-14T21:22:30.4204462Z * [new branch] gh/titaiwangms/3/head -> origin/gh/titaiwangms/3/head 2025-08-14T21:22:30.4214059Z * [new branch] gh/titaiwangms/3/orig -> origin/gh/titaiwangms/3/orig 2025-08-14T21:22:30.4214286Z * [new branch] gh/titaiwangms/4/base -> origin/gh/titaiwangms/4/base 2025-08-14T21:22:30.4214509Z * [new branch] gh/titaiwangms/4/head -> origin/gh/titaiwangms/4/head 2025-08-14T21:22:30.4214797Z * [new branch] gh/titaiwangms/4/orig -> origin/gh/titaiwangms/4/orig 2025-08-14T21:22:30.4215025Z * [new branch] gh/titaiwangms/5/base -> origin/gh/titaiwangms/5/base 2025-08-14T21:22:30.4215251Z * [new branch] gh/titaiwangms/5/head -> origin/gh/titaiwangms/5/head 2025-08-14T21:22:30.4215422Z * [new branch] gh/titaiwangms/5/orig -> origin/gh/titaiwangms/5/orig 2025-08-14T21:22:30.4215598Z * [new branch] gh/titaiwangms/6/base -> origin/gh/titaiwangms/6/base 2025-08-14T21:22:30.4215765Z * [new branch] 
gh/titaiwangms/6/head -> origin/gh/titaiwangms/6/head 2025-08-14T21:22:30.4215935Z * [new branch] gh/titaiwangms/6/orig -> origin/gh/titaiwangms/6/orig 2025-08-14T21:22:30.4216130Z * [new branch] gh/titaiwangms/7/base -> origin/gh/titaiwangms/7/base 2025-08-14T21:22:30.4217067Z * [new branch] gh/titaiwangms/7/head -> origin/gh/titaiwangms/7/head 2025-08-14T21:22:30.4218068Z * [new branch] gh/titaiwangms/7/orig -> origin/gh/titaiwangms/7/orig 2025-08-14T21:22:30.4219281Z * [new branch] gh/titaiwangms/8/base -> origin/gh/titaiwangms/8/base 2025-08-14T21:22:30.4224060Z * [new branch] gh/titaiwangms/8/head -> origin/gh/titaiwangms/8/head 2025-08-14T21:22:30.4229699Z * [new branch] gh/titaiwangms/8/orig -> origin/gh/titaiwangms/8/orig 2025-08-14T21:22:30.4231312Z * [new branch] gh/tugsbayasgalan/1/base -> origin/gh/tugsbayasgalan/1/base 2025-08-14T21:22:30.4232215Z * [new branch] gh/tugsbayasgalan/1/head -> origin/gh/tugsbayasgalan/1/head 2025-08-14T21:22:30.4233193Z * [new branch] gh/tugsbayasgalan/1/orig -> origin/gh/tugsbayasgalan/1/orig 2025-08-14T21:22:30.4242793Z * [new branch] gh/v0i0/1/base -> origin/gh/v0i0/1/base 2025-08-14T21:22:30.4242979Z * [new branch] gh/v0i0/1/head -> origin/gh/v0i0/1/head 2025-08-14T21:22:30.4243172Z * [new branch] gh/v0i0/1/orig -> origin/gh/v0i0/1/orig 2025-08-14T21:22:30.4243457Z * [new branch] gh/v0i0/2/base -> origin/gh/v0i0/2/base 2025-08-14T21:22:30.4243636Z * [new branch] gh/v0i0/2/head -> origin/gh/v0i0/2/head 2025-08-14T21:22:30.4243821Z * [new branch] gh/v0i0/2/orig -> origin/gh/v0i0/2/orig 2025-08-14T21:22:30.4244014Z * [new branch] gh/v0i0/3/base -> origin/gh/v0i0/3/base 2025-08-14T21:22:30.4244980Z * [new branch] gh/v0i0/3/head -> origin/gh/v0i0/3/head 2025-08-14T21:22:30.4245891Z * [new branch] gh/v0i0/3/orig -> origin/gh/v0i0/3/orig 2025-08-14T21:22:30.4247178Z * [new branch] gh/v0i0/4/base -> origin/gh/v0i0/4/base 2025-08-14T21:22:30.4248176Z * [new branch] gh/v0i0/4/head -> origin/gh/v0i0/4/head 2025-08-14T21:22:30.4257289Z * [new branch] gh/v0i0/4/orig -> origin/gh/v0i0/4/orig 2025-08-14T21:22:30.4257491Z * [new branch] gh/v0i0/5/base -> origin/gh/v0i0/5/base 2025-08-14T21:22:30.4257668Z * [new branch] gh/v0i0/5/head -> origin/gh/v0i0/5/head 2025-08-14T21:22:30.4257842Z * [new branch] gh/v0i0/5/orig -> origin/gh/v0i0/5/orig 2025-08-14T21:22:30.4258026Z * [new branch] gh/v0i0/6/base -> origin/gh/v0i0/6/base 2025-08-14T21:22:30.4258201Z * [new branch] gh/v0i0/6/head -> origin/gh/v0i0/6/head 2025-08-14T21:22:30.4258376Z * [new branch] gh/v0i0/6/orig -> origin/gh/v0i0/6/orig 2025-08-14T21:22:30.4258579Z * [new branch] gh/vkuzo/1/next -> origin/gh/vkuzo/1/next 2025-08-14T21:22:30.4258893Z * [new branch] gh/vkuzo/2/next -> origin/gh/vkuzo/2/next 2025-08-14T21:22:30.4260092Z * [new branch] gh/vkuzo/3/next -> origin/gh/vkuzo/3/next 2025-08-14T21:22:30.4261503Z * [new branch] gh/wconstab/392/base -> origin/gh/wconstab/392/base 2025-08-14T21:22:30.4262463Z * [new branch] gh/wconstab/392/head -> origin/gh/wconstab/392/head 2025-08-14T21:22:30.4267590Z * [new branch] gh/wconstab/392/orig -> origin/gh/wconstab/392/orig 2025-08-14T21:22:30.4269142Z * [new branch] gh/wconstab/419/base -> origin/gh/wconstab/419/base 2025-08-14T21:22:30.4270005Z * [new branch] gh/wconstab/419/head -> origin/gh/wconstab/419/head 2025-08-14T21:22:30.4270929Z * [new branch] gh/wconstab/419/orig -> origin/gh/wconstab/419/orig 2025-08-14T21:22:30.4272357Z * [new branch] gh/wconstab/424/base -> origin/gh/wconstab/424/base 2025-08-14T21:22:30.4273218Z * [new branch] 
gh/wconstab/424/head -> origin/gh/wconstab/424/head 2025-08-14T21:22:30.4274150Z * [new branch] gh/wconstab/424/orig -> origin/gh/wconstab/424/orig 2025-08-14T21:22:30.4275349Z * [new branch] gh/wconstab/425/base -> origin/gh/wconstab/425/base 2025-08-14T21:22:30.4276313Z * [new branch] gh/wconstab/425/head -> origin/gh/wconstab/425/head 2025-08-14T21:22:30.4277569Z * [new branch] gh/wconstab/425/orig -> origin/gh/wconstab/425/orig 2025-08-14T21:22:30.4282321Z * [new branch] gh/wconstab/426/base -> origin/gh/wconstab/426/base 2025-08-14T21:22:30.4282495Z * [new branch] gh/wconstab/426/head -> origin/gh/wconstab/426/head 2025-08-14T21:22:30.4282665Z * [new branch] gh/wconstab/426/orig -> origin/gh/wconstab/426/orig 2025-08-14T21:22:30.4282840Z * [new branch] gh/wconstab/427/base -> origin/gh/wconstab/427/base 2025-08-14T21:22:30.4283331Z * [new branch] gh/wconstab/427/head -> origin/gh/wconstab/427/head 2025-08-14T21:22:30.4284379Z * [new branch] gh/wconstab/427/orig -> origin/gh/wconstab/427/orig 2025-08-14T21:22:30.4286016Z * [new branch] gh/wconstab/428/base -> origin/gh/wconstab/428/base 2025-08-14T21:22:30.4286968Z * [new branch] gh/wconstab/428/head -> origin/gh/wconstab/428/head 2025-08-14T21:22:30.4288235Z * [new branch] gh/wconstab/428/orig -> origin/gh/wconstab/428/orig 2025-08-14T21:22:30.4289826Z * [new branch] gh/wconstab/429/base -> origin/gh/wconstab/429/base 2025-08-14T21:22:30.4290847Z * [new branch] gh/wconstab/429/head -> origin/gh/wconstab/429/head 2025-08-14T21:22:30.4291781Z * [new branch] gh/wconstab/429/orig -> origin/gh/wconstab/429/orig 2025-08-14T21:22:30.4300915Z * [new branch] gh/wconstab/430/base -> origin/gh/wconstab/430/base 2025-08-14T21:22:30.4301132Z * [new branch] gh/wconstab/430/head -> origin/gh/wconstab/430/head 2025-08-14T21:22:30.4301342Z * [new branch] gh/wconstab/430/orig -> origin/gh/wconstab/430/orig 2025-08-14T21:22:30.4301551Z * [new branch] gh/wconstab/431/base -> origin/gh/wconstab/431/base 2025-08-14T21:22:30.4302174Z * [new branch] gh/wconstab/431/head -> origin/gh/wconstab/431/head 2025-08-14T21:22:30.4303145Z * [new branch] gh/wconstab/431/orig -> origin/gh/wconstab/431/orig 2025-08-14T21:22:30.4304455Z * [new branch] gh/wconstab/432/base -> origin/gh/wconstab/432/base 2025-08-14T21:22:30.4305368Z * [new branch] gh/wconstab/432/head -> origin/gh/wconstab/432/head 2025-08-14T21:22:30.4306288Z * [new branch] gh/wconstab/432/orig -> origin/gh/wconstab/432/orig 2025-08-14T21:22:30.4311303Z * [new branch] gh/wconstab/433/base -> origin/gh/wconstab/433/base 2025-08-14T21:22:30.4311533Z * [new branch] gh/wconstab/433/head -> origin/gh/wconstab/433/head 2025-08-14T21:22:30.4311700Z * [new branch] gh/wconstab/433/orig -> origin/gh/wconstab/433/orig 2025-08-14T21:22:30.4311875Z * [new branch] gh/wconstab/434/base -> origin/gh/wconstab/434/base 2025-08-14T21:22:30.4312042Z * [new branch] gh/wconstab/434/head -> origin/gh/wconstab/434/head 2025-08-14T21:22:30.4312775Z * [new branch] gh/wconstab/434/orig -> origin/gh/wconstab/434/orig 2025-08-14T21:22:30.4314135Z * [new branch] gh/wconstab/435/base -> origin/gh/wconstab/435/base 2025-08-14T21:22:30.4315076Z * [new branch] gh/wconstab/435/head -> origin/gh/wconstab/435/head 2025-08-14T21:22:30.4315988Z * [new branch] gh/wconstab/435/orig -> origin/gh/wconstab/435/orig 2025-08-14T21:22:30.4317295Z * [new branch] gh/wconstab/436/base -> origin/gh/wconstab/436/base 2025-08-14T21:22:30.4318276Z * [new branch] gh/wconstab/436/head -> origin/gh/wconstab/436/head 2025-08-14T21:22:30.4319191Z * [new branch] 
gh/wconstab/436/orig -> origin/gh/wconstab/436/orig 2025-08-14T21:22:30.4320801Z * [new branch] gh/wconstab/437/base -> origin/gh/wconstab/437/base 2025-08-14T21:22:30.4326225Z * [new branch] gh/wconstab/437/head -> origin/gh/wconstab/437/head 2025-08-14T21:22:30.4327183Z * [new branch] gh/wconstab/437/orig -> origin/gh/wconstab/437/orig 2025-08-14T21:22:30.4328752Z * [new branch] gh/wconstab/438/base -> origin/gh/wconstab/438/base 2025-08-14T21:22:30.4329693Z * [new branch] gh/wconstab/438/head -> origin/gh/wconstab/438/head 2025-08-14T21:22:30.4330626Z * [new branch] gh/wconstab/438/orig -> origin/gh/wconstab/438/orig 2025-08-14T21:22:30.4332267Z * [new branch] gh/wconstab/439/base -> origin/gh/wconstab/439/base 2025-08-14T21:22:30.4333285Z * [new branch] gh/wconstab/439/head -> origin/gh/wconstab/439/head 2025-08-14T21:22:30.4334219Z * [new branch] gh/wconstab/439/orig -> origin/gh/wconstab/439/orig 2025-08-14T21:22:30.4335529Z * [new branch] gh/wconstab/440/base -> origin/gh/wconstab/440/base 2025-08-14T21:22:30.4342261Z * [new branch] gh/wconstab/440/head -> origin/gh/wconstab/440/head 2025-08-14T21:22:30.4342432Z * [new branch] gh/wconstab/440/orig -> origin/gh/wconstab/440/orig 2025-08-14T21:22:30.4342593Z * [new branch] gh/wconstab/441/base -> origin/gh/wconstab/441/base 2025-08-14T21:22:30.4342761Z * [new branch] gh/wconstab/441/head -> origin/gh/wconstab/441/head 2025-08-14T21:22:30.4342923Z * [new branch] gh/wconstab/441/orig -> origin/gh/wconstab/441/orig 2025-08-14T21:22:30.4343100Z * [new branch] gh/wconstab/442/base -> origin/gh/wconstab/442/base 2025-08-14T21:22:30.4343676Z * [new branch] gh/wconstab/442/head -> origin/gh/wconstab/442/head 2025-08-14T21:22:30.4344634Z * [new branch] gh/wconstab/442/orig -> origin/gh/wconstab/442/orig 2025-08-14T21:22:30.4346300Z * [new branch] gh/weifengpy/27/base -> origin/gh/weifengpy/27/base 2025-08-14T21:22:30.4347166Z * [new branch] gh/weifengpy/27/head -> origin/gh/weifengpy/27/head 2025-08-14T21:22:30.4348086Z * [new branch] gh/weifengpy/27/orig -> origin/gh/weifengpy/27/orig 2025-08-14T21:22:30.4349838Z * [new branch] gh/weifengpy/30/base -> origin/gh/weifengpy/30/base 2025-08-14T21:22:30.4355048Z * [new branch] gh/weifengpy/30/head -> origin/gh/weifengpy/30/head 2025-08-14T21:22:30.4355971Z * [new branch] gh/weifengpy/30/orig -> origin/gh/weifengpy/30/orig 2025-08-14T21:22:30.4357315Z * [new branch] gh/weifengpy/31/base -> origin/gh/weifengpy/31/base 2025-08-14T21:22:30.4358105Z * [new branch] gh/weifengpy/31/head -> origin/gh/weifengpy/31/head 2025-08-14T21:22:30.4359066Z * [new branch] gh/weifengpy/31/orig -> origin/gh/weifengpy/31/orig 2025-08-14T21:22:30.4360255Z * [new branch] gh/weifengpy/32/base -> origin/gh/weifengpy/32/base 2025-08-14T21:22:30.4361188Z * [new branch] gh/weifengpy/32/head -> origin/gh/weifengpy/32/head 2025-08-14T21:22:30.4362195Z * [new branch] gh/weifengpy/32/orig -> origin/gh/weifengpy/32/orig 2025-08-14T21:22:30.4363424Z * [new branch] gh/weifengpy/33/base -> origin/gh/weifengpy/33/base 2025-08-14T21:22:30.4364325Z * [new branch] gh/weifengpy/33/head -> origin/gh/weifengpy/33/head 2025-08-14T21:22:30.4373405Z * [new branch] gh/weifengpy/33/orig -> origin/gh/weifengpy/33/orig 2025-08-14T21:22:30.4373681Z * [new branch] gh/williamwen42/196/base -> origin/gh/williamwen42/196/base 2025-08-14T21:22:30.4373933Z * [new branch] gh/williamwen42/196/head -> origin/gh/williamwen42/196/head 2025-08-14T21:22:30.4374166Z * [new branch] gh/williamwen42/196/orig -> origin/gh/williamwen42/196/orig 
2025-08-14T21:22:30.4374395Z * [new branch] gh/williamwen42/209/base -> origin/gh/williamwen42/209/base 2025-08-14T21:22:30.4374631Z * [new branch] gh/williamwen42/209/head -> origin/gh/williamwen42/209/head 2025-08-14T21:22:30.4374862Z * [new branch] gh/williamwen42/209/orig -> origin/gh/williamwen42/209/orig 2025-08-14T21:22:30.4375101Z * [new branch] gh/williamwen42/250/base -> origin/gh/williamwen42/250/base 2025-08-14T21:22:30.4375332Z * [new branch] gh/williamwen42/250/head -> origin/gh/williamwen42/250/head 2025-08-14T21:22:30.4375792Z * [new branch] gh/williamwen42/250/orig -> origin/gh/williamwen42/250/orig 2025-08-14T21:22:30.4377012Z * [new branch] gh/williamwen42/252/base -> origin/gh/williamwen42/252/base 2025-08-14T21:22:30.4377945Z * [new branch] gh/williamwen42/252/head -> origin/gh/williamwen42/252/head 2025-08-14T21:22:30.4378869Z * [new branch] gh/williamwen42/252/orig -> origin/gh/williamwen42/252/orig 2025-08-14T21:22:30.4388764Z * [new branch] gh/williamwen42/256/base -> origin/gh/williamwen42/256/base 2025-08-14T21:22:30.4389704Z * [new branch] gh/williamwen42/256/head -> origin/gh/williamwen42/256/head 2025-08-14T21:22:30.4390647Z * [new branch] gh/williamwen42/256/orig -> origin/gh/williamwen42/256/orig 2025-08-14T21:22:30.4391995Z * [new branch] gh/williamwen42/258/base -> origin/gh/williamwen42/258/base 2025-08-14T21:22:30.4392963Z * [new branch] gh/williamwen42/258/head -> origin/gh/williamwen42/258/head 2025-08-14T21:22:30.4398152Z * [new branch] gh/williamwen42/258/orig -> origin/gh/williamwen42/258/orig 2025-08-14T21:22:30.4398377Z * [new branch] gh/williamwen42/260/base -> origin/gh/williamwen42/260/base 2025-08-14T21:22:30.4399093Z * [new branch] gh/williamwen42/260/head -> origin/gh/williamwen42/260/head 2025-08-14T21:22:30.4402691Z * [new branch] gh/williamwen42/260/orig -> origin/gh/williamwen42/260/orig 2025-08-14T21:22:30.4402948Z * [new branch] gh/williamwen42/261/base -> origin/gh/williamwen42/261/base 2025-08-14T21:22:30.4403177Z * [new branch] gh/williamwen42/261/head -> origin/gh/williamwen42/261/head 2025-08-14T21:22:30.4403365Z * [new branch] gh/williamwen42/261/orig -> origin/gh/williamwen42/261/orig 2025-08-14T21:22:30.4404806Z * [new branch] gh/williamwen42/262/base -> origin/gh/williamwen42/262/base 2025-08-14T21:22:30.4406270Z * [new branch] gh/williamwen42/262/head -> origin/gh/williamwen42/262/head 2025-08-14T21:22:30.4407184Z * [new branch] gh/williamwen42/262/orig -> origin/gh/williamwen42/262/orig 2025-08-14T21:22:30.4414775Z * [new branch] gh/williamwen42/263/base -> origin/gh/williamwen42/263/base 2025-08-14T21:22:30.4415011Z * [new branch] gh/williamwen42/263/head -> origin/gh/williamwen42/263/head 2025-08-14T21:22:30.4415243Z * [new branch] gh/williamwen42/263/orig -> origin/gh/williamwen42/263/orig 2025-08-14T21:22:30.4415469Z * [new branch] gh/williamwen42/264/base -> origin/gh/williamwen42/264/base 2025-08-14T21:22:30.4415707Z * [new branch] gh/williamwen42/264/head -> origin/gh/williamwen42/264/head 2025-08-14T21:22:30.4415932Z * [new branch] gh/williamwen42/264/orig -> origin/gh/williamwen42/264/orig 2025-08-14T21:22:30.4416168Z * [new branch] gh/williamwen42/265/base -> origin/gh/williamwen42/265/base 2025-08-14T21:22:30.4416404Z * [new branch] gh/williamwen42/265/head -> origin/gh/williamwen42/265/head 2025-08-14T21:22:30.4416989Z * [new branch] gh/williamwen42/265/orig -> origin/gh/williamwen42/265/orig 2025-08-14T21:22:30.4418235Z * [new branch] gh/williamwen42/266/base -> origin/gh/williamwen42/266/base 
2025-08-14T21:22:30.4419034Z * [new branch] gh/williamwen42/266/head -> origin/gh/williamwen42/266/head 2025-08-14T21:22:30.4419956Z * [new branch] gh/williamwen42/266/orig -> origin/gh/williamwen42/266/orig 2025-08-14T21:22:30.4421126Z * [new branch] gh/williamwen42/267/base -> origin/gh/williamwen42/267/base 2025-08-14T21:22:30.4422007Z * [new branch] gh/williamwen42/267/head -> origin/gh/williamwen42/267/head 2025-08-14T21:22:30.4427181Z * [new branch] gh/williamwen42/267/orig -> origin/gh/williamwen42/267/orig 2025-08-14T21:22:30.4428646Z * [new branch] gh/williamwen42/268/base -> origin/gh/williamwen42/268/base 2025-08-14T21:22:30.4429532Z * [new branch] gh/williamwen42/268/head -> origin/gh/williamwen42/268/head 2025-08-14T21:22:30.4430508Z * [new branch] gh/williamwen42/268/orig -> origin/gh/williamwen42/268/orig 2025-08-14T21:22:30.4431751Z * [new branch] gh/williamwen42/269/base -> origin/gh/williamwen42/269/base 2025-08-14T21:22:30.4432626Z * [new branch] gh/williamwen42/269/head -> origin/gh/williamwen42/269/head 2025-08-14T21:22:30.4433529Z * [new branch] gh/williamwen42/269/orig -> origin/gh/williamwen42/269/orig 2025-08-14T21:22:30.4435243Z * [new branch] gh/williamwen42/270/base -> origin/gh/williamwen42/270/base 2025-08-14T21:22:30.4436196Z * [new branch] gh/williamwen42/270/head -> origin/gh/williamwen42/270/head 2025-08-14T21:22:30.4437190Z * [new branch] gh/williamwen42/270/orig -> origin/gh/williamwen42/270/orig 2025-08-14T21:22:30.4441573Z * [new branch] gh/williamwen42/271/base -> origin/gh/williamwen42/271/base 2025-08-14T21:22:30.4441819Z * [new branch] gh/williamwen42/271/head -> origin/gh/williamwen42/271/head 2025-08-14T21:22:30.4442051Z * [new branch] gh/williamwen42/271/orig -> origin/gh/williamwen42/271/orig 2025-08-14T21:22:30.4442539Z * [new branch] gh/williamwen42/272/base -> origin/gh/williamwen42/272/base 2025-08-14T21:22:30.4443930Z * [new branch] gh/williamwen42/272/head -> origin/gh/williamwen42/272/head 2025-08-14T21:22:30.4444524Z * [new branch] gh/williamwen42/272/orig -> origin/gh/williamwen42/272/orig 2025-08-14T21:22:30.4445881Z * [new branch] gh/williamwen42/273/base -> origin/gh/williamwen42/273/base 2025-08-14T21:22:30.4446860Z * [new branch] gh/williamwen42/273/head -> origin/gh/williamwen42/273/head 2025-08-14T21:22:30.4447858Z * [new branch] gh/williamwen42/273/orig -> origin/gh/williamwen42/273/orig 2025-08-14T21:22:30.4449536Z * [new branch] gh/williamwen42/274/base -> origin/gh/williamwen42/274/base 2025-08-14T21:22:30.4450516Z * [new branch] gh/williamwen42/274/head -> origin/gh/williamwen42/274/head 2025-08-14T21:22:30.4451420Z * [new branch] gh/williamwen42/274/orig -> origin/gh/williamwen42/274/orig 2025-08-14T21:22:30.4460343Z * [new branch] gh/williamwen42/275/base -> origin/gh/williamwen42/275/base 2025-08-14T21:22:30.4460572Z * [new branch] gh/williamwen42/275/head -> origin/gh/williamwen42/275/head 2025-08-14T21:22:30.4460799Z * [new branch] gh/williamwen42/276/base -> origin/gh/williamwen42/276/base 2025-08-14T21:22:30.4461024Z * [new branch] gh/williamwen42/276/head -> origin/gh/williamwen42/276/head 2025-08-14T21:22:30.4461247Z * [new branch] gh/williamwen42/276/orig -> origin/gh/williamwen42/276/orig 2025-08-14T21:22:30.4462327Z * [new branch] gh/williamwen42/277/base -> origin/gh/williamwen42/277/base 2025-08-14T21:22:30.4463567Z * [new branch] gh/williamwen42/277/head -> origin/gh/williamwen42/277/head 2025-08-14T21:22:30.4464500Z * [new branch] gh/williamwen42/277/orig -> origin/gh/williamwen42/277/orig 
2025-08-14T21:22:30.4465629Z * [new branch] gh/williamwen42/278/base -> origin/gh/williamwen42/278/base 2025-08-14T21:22:30.4470779Z * [new branch] gh/williamwen42/278/head -> origin/gh/williamwen42/278/head 2025-08-14T21:22:30.4470962Z * [new branch] gh/williamwen42/278/orig -> origin/gh/williamwen42/278/orig 2025-08-14T21:22:30.4471173Z * [new branch] gh/williamwen42/279/base -> origin/gh/williamwen42/279/base 2025-08-14T21:22:30.4471361Z * [new branch] gh/williamwen42/279/head -> origin/gh/williamwen42/279/head 2025-08-14T21:22:30.4471544Z * [new branch] gh/williamwen42/279/orig -> origin/gh/williamwen42/279/orig 2025-08-14T21:22:30.4473370Z * [new branch] gh/xmfan/169/base -> origin/gh/xmfan/169/base 2025-08-14T21:22:30.4474138Z * [new branch] gh/xmfan/169/head -> origin/gh/xmfan/169/head 2025-08-14T21:22:30.4475311Z * [new branch] gh/xmfan/170/base -> origin/gh/xmfan/170/base 2025-08-14T21:22:30.4476447Z * [new branch] gh/xmfan/170/head -> origin/gh/xmfan/170/head 2025-08-14T21:22:30.4477828Z * [new branch] gh/xmfan/18/base -> origin/gh/xmfan/18/base 2025-08-14T21:22:30.4478776Z * [new branch] gh/xmfan/18/head -> origin/gh/xmfan/18/head 2025-08-14T21:22:30.4480051Z * [new branch] gh/xmfan/228/base -> origin/gh/xmfan/228/base 2025-08-14T21:22:30.4481224Z * [new branch] gh/xmfan/228/head -> origin/gh/xmfan/228/head 2025-08-14T21:22:30.4486325Z * [new branch] gh/xmfan/228/orig -> origin/gh/xmfan/228/orig 2025-08-14T21:22:30.4487640Z * [new branch] gh/xmfan/229/base -> origin/gh/xmfan/229/base 2025-08-14T21:22:30.4488550Z * [new branch] gh/xmfan/229/head -> origin/gh/xmfan/229/head 2025-08-14T21:22:30.4489473Z * [new branch] gh/xmfan/229/orig -> origin/gh/xmfan/229/orig 2025-08-14T21:22:30.4490635Z * [new branch] gh/xmfan/237/base -> origin/gh/xmfan/237/base 2025-08-14T21:22:30.4491630Z * [new branch] gh/xmfan/237/head -> origin/gh/xmfan/237/head 2025-08-14T21:22:30.4492527Z * [new branch] gh/xmfan/237/orig -> origin/gh/xmfan/237/orig 2025-08-14T21:22:30.4493717Z * [new branch] gh/xmfan/244/base -> origin/gh/xmfan/244/base 2025-08-14T21:22:30.4494652Z * [new branch] gh/xmfan/244/head -> origin/gh/xmfan/244/head 2025-08-14T21:22:30.4500226Z * [new branch] gh/xmfan/244/orig -> origin/gh/xmfan/244/orig 2025-08-14T21:22:30.4500397Z * [new branch] gh/xmfan/246/base -> origin/gh/xmfan/246/base 2025-08-14T21:22:30.4500557Z * [new branch] gh/xmfan/246/head -> origin/gh/xmfan/246/head 2025-08-14T21:22:30.4500780Z * [new branch] gh/xmfan/246/orig -> origin/gh/xmfan/246/orig 2025-08-14T21:22:30.4500984Z * [new branch] gh/xmfan/253/base -> origin/gh/xmfan/253/base 2025-08-14T21:22:30.4501139Z * [new branch] gh/xmfan/253/head -> origin/gh/xmfan/253/head 2025-08-14T21:22:30.4501921Z * [new branch] gh/xmfan/253/orig -> origin/gh/xmfan/253/orig 2025-08-14T21:22:30.4503087Z * [new branch] gh/xmfan/254/base -> origin/gh/xmfan/254/base 2025-08-14T21:22:30.4504076Z * [new branch] gh/xmfan/254/head -> origin/gh/xmfan/254/head 2025-08-14T21:22:30.4504994Z * [new branch] gh/xmfan/254/orig -> origin/gh/xmfan/254/orig 2025-08-14T21:22:30.4506193Z * [new branch] gh/xmfan/260/base -> origin/gh/xmfan/260/base 2025-08-14T21:22:30.4507102Z * [new branch] gh/xmfan/260/head -> origin/gh/xmfan/260/head 2025-08-14T21:22:30.4508003Z * [new branch] gh/xmfan/260/orig -> origin/gh/xmfan/260/orig 2025-08-14T21:22:30.4509179Z * [new branch] gh/xmfan/262/base -> origin/gh/xmfan/262/base 2025-08-14T21:22:30.4510293Z * [new branch] gh/xmfan/262/head -> origin/gh/xmfan/262/head 2025-08-14T21:22:30.4519636Z * [new branch] 
gh/xmfan/262/orig -> origin/gh/xmfan/262/orig 2025-08-14T21:22:30.4520898Z * [new branch] gh/xmfan/263/base -> origin/gh/xmfan/263/base 2025-08-14T21:22:30.4521911Z * [new branch] gh/xmfan/263/head -> origin/gh/xmfan/263/head 2025-08-14T21:22:30.4522912Z * [new branch] gh/xmfan/263/orig -> origin/gh/xmfan/263/orig 2025-08-14T21:22:30.4524212Z * [new branch] gh/xmfan/264/base -> origin/gh/xmfan/264/base 2025-08-14T21:22:30.4533043Z * [new branch] gh/xmfan/264/head -> origin/gh/xmfan/264/head 2025-08-14T21:22:30.4533275Z * [new branch] gh/xmfan/264/orig -> origin/gh/xmfan/264/orig 2025-08-14T21:22:30.4533474Z * [new branch] gh/xmfan/268/base -> origin/gh/xmfan/268/base 2025-08-14T21:22:30.4533634Z * [new branch] gh/xmfan/268/head -> origin/gh/xmfan/268/head 2025-08-14T21:22:30.4533799Z * [new branch] gh/xmfan/268/orig -> origin/gh/xmfan/268/orig 2025-08-14T21:22:30.4533951Z * [new branch] gh/xmfan/269/base -> origin/gh/xmfan/269/base 2025-08-14T21:22:30.4535235Z * [new branch] gh/xmfan/269/head -> origin/gh/xmfan/269/head 2025-08-14T21:22:30.4536241Z * [new branch] gh/xmfan/269/orig -> origin/gh/xmfan/269/orig 2025-08-14T21:22:30.4537512Z * [new branch] gh/xmfan/270/base -> origin/gh/xmfan/270/base 2025-08-14T21:22:30.4541240Z * [new branch] gh/xmfan/270/head -> origin/gh/xmfan/270/head 2025-08-14T21:22:30.4541399Z * [new branch] gh/xmfan/270/orig -> origin/gh/xmfan/270/orig 2025-08-14T21:22:30.4541626Z * [new branch] gh/xmfan/271/base -> origin/gh/xmfan/271/base 2025-08-14T21:22:30.4542688Z * [new branch] gh/xmfan/271/head -> origin/gh/xmfan/271/head 2025-08-14T21:22:30.4543577Z * [new branch] gh/xmfan/271/orig -> origin/gh/xmfan/271/orig 2025-08-14T21:22:30.4544858Z * [new branch] gh/xmfan/272/base -> origin/gh/xmfan/272/base 2025-08-14T21:22:30.4545745Z * [new branch] gh/xmfan/272/head -> origin/gh/xmfan/272/head 2025-08-14T21:22:30.4546723Z * [new branch] gh/xmfan/272/orig -> origin/gh/xmfan/272/orig 2025-08-14T21:22:30.4548293Z * [new branch] gh/xmfan/273/base -> origin/gh/xmfan/273/base 2025-08-14T21:22:30.4549724Z * [new branch] gh/xmfan/273/head -> origin/gh/xmfan/273/head 2025-08-14T21:22:30.4550714Z * [new branch] gh/xmfan/273/orig -> origin/gh/xmfan/273/orig 2025-08-14T21:22:30.4552333Z * [new branch] gh/xmfan/274/base -> origin/gh/xmfan/274/base 2025-08-14T21:22:30.4553256Z * [new branch] gh/xmfan/274/head -> origin/gh/xmfan/274/head 2025-08-14T21:22:30.4561945Z * [new branch] gh/xmfan/274/orig -> origin/gh/xmfan/274/orig 2025-08-14T21:22:30.4562157Z * [new branch] gh/xmfan/275/base -> origin/gh/xmfan/275/base 2025-08-14T21:22:30.4563031Z * [new branch] gh/xmfan/275/head -> origin/gh/xmfan/275/head 2025-08-14T21:22:30.4563858Z * [new branch] gh/xmfan/275/orig -> origin/gh/xmfan/275/orig 2025-08-14T21:22:30.4565163Z * [new branch] gh/xmfan/276/base -> origin/gh/xmfan/276/base 2025-08-14T21:22:30.4566090Z * [new branch] gh/xmfan/276/head -> origin/gh/xmfan/276/head 2025-08-14T21:22:30.4567325Z * [new branch] gh/xmfan/276/orig -> origin/gh/xmfan/276/orig 2025-08-14T21:22:30.4572056Z * [new branch] gh/xmfan/277/base -> origin/gh/xmfan/277/base 2025-08-14T21:22:30.4572242Z * [new branch] gh/xmfan/277/head -> origin/gh/xmfan/277/head 2025-08-14T21:22:30.4572397Z * [new branch] gh/xmfan/277/orig -> origin/gh/xmfan/277/orig 2025-08-14T21:22:30.4572590Z * [new branch] gh/xuanzhang816/12/base -> origin/gh/xuanzhang816/12/base 2025-08-14T21:22:30.4573049Z * [new branch] gh/xuanzhang816/12/head -> origin/gh/xuanzhang816/12/head 2025-08-14T21:22:30.4574011Z * [new branch] 
gh/xuanzhang816/12/orig -> origin/gh/xuanzhang816/12/orig 2025-08-14T21:22:30.4575637Z * [new branch] gh/xuanzhang816/14/base -> origin/gh/xuanzhang816/14/base 2025-08-14T21:22:30.4576541Z * [new branch] gh/xuanzhang816/14/head -> origin/gh/xuanzhang816/14/head 2025-08-14T21:22:30.4577481Z * [new branch] gh/xuanzhang816/14/orig -> origin/gh/xuanzhang816/14/orig 2025-08-14T21:22:30.4578711Z * [new branch] gh/xuanzhang816/18/base -> origin/gh/xuanzhang816/18/base 2025-08-14T21:22:30.4579867Z * [new branch] gh/xuanzhang816/18/head -> origin/gh/xuanzhang816/18/head 2025-08-14T21:22:30.4580796Z * [new branch] gh/xuanzhang816/18/orig -> origin/gh/xuanzhang816/18/orig 2025-08-14T21:22:30.4582203Z * [new branch] gh/xuanzhang816/19/base -> origin/gh/xuanzhang816/19/base 2025-08-14T21:22:30.4587433Z * [new branch] gh/xuanzhang816/19/head -> origin/gh/xuanzhang816/19/head 2025-08-14T21:22:30.4588316Z * [new branch] gh/xuanzhang816/19/orig -> origin/gh/xuanzhang816/19/orig 2025-08-14T21:22:30.4589581Z * [new branch] gh/xuanzhang816/20/base -> origin/gh/xuanzhang816/20/base 2025-08-14T21:22:30.4590880Z * [new branch] gh/xuanzhang816/20/head -> origin/gh/xuanzhang816/20/head 2025-08-14T21:22:30.4592160Z * [new branch] gh/xuanzhang816/20/orig -> origin/gh/xuanzhang816/20/orig 2025-08-14T21:22:30.4593434Z * [new branch] gh/xuanzhang816/21/base -> origin/gh/xuanzhang816/21/base 2025-08-14T21:22:30.4594785Z * [new branch] gh/xuanzhang816/21/head -> origin/gh/xuanzhang816/21/head 2025-08-14T21:22:30.4595728Z * [new branch] gh/xuanzhang816/21/orig -> origin/gh/xuanzhang816/21/orig 2025-08-14T21:22:30.4601380Z * [new branch] gh/xuanzhang816/22/base -> origin/gh/xuanzhang816/22/base 2025-08-14T21:22:30.4601671Z * [new branch] gh/xuanzhang816/22/head -> origin/gh/xuanzhang816/22/head 2025-08-14T21:22:30.4601853Z * [new branch] gh/xuanzhang816/22/orig -> origin/gh/xuanzhang816/22/orig 2025-08-14T21:22:30.4602032Z * [new branch] gh/xuanzhang816/23/base -> origin/gh/xuanzhang816/23/base 2025-08-14T21:22:30.4602203Z * [new branch] gh/xuanzhang816/23/head -> origin/gh/xuanzhang816/23/head 2025-08-14T21:22:30.4602381Z * [new branch] gh/xuanzhang816/23/orig -> origin/gh/xuanzhang816/23/orig 2025-08-14T21:22:30.4605451Z * [new branch] gh/xuanzhang816/24/base -> origin/gh/xuanzhang816/24/base 2025-08-14T21:22:30.4605692Z * [new branch] gh/xuanzhang816/24/head -> origin/gh/xuanzhang816/24/head 2025-08-14T21:22:30.4605881Z * [new branch] gh/xuanzhang816/24/orig -> origin/gh/xuanzhang816/24/orig 2025-08-14T21:22:30.4607363Z * [new branch] gh/yanbing-j/11/base -> origin/gh/yanbing-j/11/base 2025-08-14T21:22:30.4608260Z * [new branch] gh/yanbing-j/11/head -> origin/gh/yanbing-j/11/head 2025-08-14T21:22:30.4609296Z * [new branch] gh/yanbing-j/11/orig -> origin/gh/yanbing-j/11/orig 2025-08-14T21:22:30.4610513Z * [new branch] gh/yanbing-j/12/base -> origin/gh/yanbing-j/12/base 2025-08-14T21:22:30.4619913Z * [new branch] gh/yanbing-j/12/head -> origin/gh/yanbing-j/12/head 2025-08-14T21:22:30.4620113Z * [new branch] gh/yanbing-j/12/orig -> origin/gh/yanbing-j/12/orig 2025-08-14T21:22:30.4621116Z * [new branch] gh/yanbing-j/13/base -> origin/gh/yanbing-j/13/base 2025-08-14T21:22:30.4621319Z * [new branch] gh/yanbing-j/13/head -> origin/gh/yanbing-j/13/head 2025-08-14T21:22:30.4621517Z * [new branch] gh/yanbing-j/13/orig -> origin/gh/yanbing-j/13/orig 2025-08-14T21:22:30.4621741Z * [new branch] gh/yanbing-j/14/base -> origin/gh/yanbing-j/14/base 2025-08-14T21:22:30.4622039Z * [new branch] gh/yanbing-j/14/head -> 
origin/gh/yanbing-j/14/head 2025-08-14T21:22:30.4622904Z * [new branch] gh/yanbing-j/14/orig -> origin/gh/yanbing-j/14/orig 2025-08-14T21:22:30.4624137Z * [new branch] gh/yanbing-j/15/base -> origin/gh/yanbing-j/15/base 2025-08-14T21:22:30.4625055Z * [new branch] gh/yanbing-j/15/head -> origin/gh/yanbing-j/15/head 2025-08-14T21:22:30.4630084Z * [new branch] gh/yanbing-j/15/orig -> origin/gh/yanbing-j/15/orig 2025-08-14T21:22:30.4630244Z * [new branch] gh/yanbing-j/18/base -> origin/gh/yanbing-j/18/base 2025-08-14T21:22:30.4630411Z * [new branch] gh/yanbing-j/18/head -> origin/gh/yanbing-j/18/head 2025-08-14T21:22:30.4630569Z * [new branch] gh/yanbing-j/18/orig -> origin/gh/yanbing-j/18/orig 2025-08-14T21:22:30.4631006Z * [new branch] gh/yanbing-j/19/base -> origin/gh/yanbing-j/19/base 2025-08-14T21:22:30.4631935Z * [new branch] gh/yanbing-j/19/head -> origin/gh/yanbing-j/19/head 2025-08-14T21:22:30.4632839Z * [new branch] gh/yanbing-j/19/orig -> origin/gh/yanbing-j/19/orig 2025-08-14T21:22:30.4634056Z * [new branch] gh/yanbing-j/20/base -> origin/gh/yanbing-j/20/base 2025-08-14T21:22:30.4634947Z * [new branch] gh/yanbing-j/20/head -> origin/gh/yanbing-j/20/head 2025-08-14T21:22:30.4635935Z * [new branch] gh/yanbing-j/20/orig -> origin/gh/yanbing-j/20/orig 2025-08-14T21:22:30.4637229Z * [new branch] gh/yanbing-j/21/base -> origin/gh/yanbing-j/21/base 2025-08-14T21:22:30.4638181Z * [new branch] gh/yanbing-j/21/head -> origin/gh/yanbing-j/21/head 2025-08-14T21:22:30.4639386Z * [new branch] gh/yanbing-j/22/base -> origin/gh/yanbing-j/22/base 2025-08-14T21:22:30.4644606Z * [new branch] gh/yanbing-j/22/head -> origin/gh/yanbing-j/22/head 2025-08-14T21:22:30.4645610Z * [new branch] gh/yanbing-j/22/orig -> origin/gh/yanbing-j/22/orig 2025-08-14T21:22:30.4646874Z * [new branch] gh/yanbing-j/23/base -> origin/gh/yanbing-j/23/base 2025-08-14T21:22:30.4647790Z * [new branch] gh/yanbing-j/23/head -> origin/gh/yanbing-j/23/head 2025-08-14T21:22:30.4649024Z * [new branch] gh/yanbing-j/23/orig -> origin/gh/yanbing-j/23/orig 2025-08-14T21:22:30.4650387Z * [new branch] gh/yanbing-j/24/base -> origin/gh/yanbing-j/24/base 2025-08-14T21:22:30.4651353Z * [new branch] gh/yanbing-j/24/head -> origin/gh/yanbing-j/24/head 2025-08-14T21:22:30.4652250Z * [new branch] gh/yanbing-j/24/orig -> origin/gh/yanbing-j/24/orig 2025-08-14T21:22:30.4653614Z * [new branch] gh/yanbing-j/25/base -> origin/gh/yanbing-j/25/base 2025-08-14T21:22:30.4654497Z * [new branch] gh/yanbing-j/25/head -> origin/gh/yanbing-j/25/head 2025-08-14T21:22:30.4659409Z * [new branch] gh/yanbing-j/25/orig -> origin/gh/yanbing-j/25/orig 2025-08-14T21:22:30.4659628Z * [new branch] gh/yanbing-j/26/base -> origin/gh/yanbing-j/26/base 2025-08-14T21:22:30.4659841Z * [new branch] gh/yanbing-j/26/head -> origin/gh/yanbing-j/26/head 2025-08-14T21:22:30.4660003Z * [new branch] gh/yanbing-j/26/orig -> origin/gh/yanbing-j/26/orig 2025-08-14T21:22:30.4660270Z * [new branch] gh/yanbing-j/36/base -> origin/gh/yanbing-j/36/base 2025-08-14T21:22:30.4663537Z * [new branch] gh/yanbing-j/36/head -> origin/gh/yanbing-j/36/head 2025-08-14T21:22:30.4663765Z * [new branch] gh/yanbing-j/36/orig -> origin/gh/yanbing-j/36/orig 2025-08-14T21:22:30.4663979Z * [new branch] gh/yanbing-j/37/base -> origin/gh/yanbing-j/37/base 2025-08-14T21:22:30.4664343Z * [new branch] gh/yanbing-j/37/head -> origin/gh/yanbing-j/37/head 2025-08-14T21:22:30.4665284Z * [new branch] gh/yanbing-j/37/orig -> origin/gh/yanbing-j/37/orig 2025-08-14T21:22:30.4666935Z * [new branch] gh/yanbing-j/39/base -> 
origin/gh/yanbing-j/39/base 2025-08-14T21:22:30.4667844Z * [new branch] gh/yanbing-j/39/head -> origin/gh/yanbing-j/39/head 2025-08-14T21:22:30.4668759Z * [new branch] gh/yanbing-j/39/orig -> origin/gh/yanbing-j/39/orig 2025-08-14T21:22:30.4678759Z * [new branch] gh/yangw-dev/1/base -> origin/gh/yangw-dev/1/base 2025-08-14T21:22:30.4680103Z * [new branch] gh/yangw-dev/10/base -> origin/gh/yangw-dev/10/base 2025-08-14T21:22:30.4681021Z * [new branch] gh/yangw-dev/10/head -> origin/gh/yangw-dev/10/head 2025-08-14T21:22:30.4682018Z * [new branch] gh/yangw-dev/10/orig -> origin/gh/yangw-dev/10/orig 2025-08-14T21:22:30.4683539Z * [new branch] gh/yangw-dev/11/base -> origin/gh/yangw-dev/11/base 2025-08-14T21:22:30.4692341Z * [new branch] gh/yangw-dev/11/head -> origin/gh/yangw-dev/11/head 2025-08-14T21:22:30.4692545Z * [new branch] gh/yangw-dev/11/orig -> origin/gh/yangw-dev/11/orig 2025-08-14T21:22:30.4692764Z * [new branch] gh/yangw-dev/12/base -> origin/gh/yangw-dev/12/base 2025-08-14T21:22:30.4692943Z * [new branch] gh/yangw-dev/12/head -> origin/gh/yangw-dev/12/head 2025-08-14T21:22:30.4693109Z * [new branch] gh/yangw-dev/12/orig -> origin/gh/yangw-dev/12/orig 2025-08-14T21:22:30.4693268Z * [new branch] gh/yangw-dev/13/base -> origin/gh/yangw-dev/13/base 2025-08-14T21:22:30.4693491Z * [new branch] gh/yangw-dev/13/head -> origin/gh/yangw-dev/13/head 2025-08-14T21:22:30.4694566Z * [new branch] gh/yangw-dev/13/orig -> origin/gh/yangw-dev/13/orig 2025-08-14T21:22:30.4695742Z * [new branch] gh/yangw-dev/14/base -> origin/gh/yangw-dev/14/base 2025-08-14T21:22:30.4696666Z * [new branch] gh/yangw-dev/14/head -> origin/gh/yangw-dev/14/head 2025-08-14T21:22:30.4697956Z * [new branch] gh/yangw-dev/14/orig -> origin/gh/yangw-dev/14/orig 2025-08-14T21:22:30.4704985Z * [new branch] gh/yangw-dev/15/base -> origin/gh/yangw-dev/15/base 2025-08-14T21:22:30.4705201Z * [new branch] gh/yangw-dev/15/head -> origin/gh/yangw-dev/15/head 2025-08-14T21:22:30.4705416Z * [new branch] gh/yangw-dev/15/orig -> origin/gh/yangw-dev/15/orig 2025-08-14T21:22:30.4705582Z * [new branch] gh/yangw-dev/16/base -> origin/gh/yangw-dev/16/base 2025-08-14T21:22:30.4705748Z * [new branch] gh/yangw-dev/16/head -> origin/gh/yangw-dev/16/head 2025-08-14T21:22:30.4705921Z * [new branch] gh/yangw-dev/16/orig -> origin/gh/yangw-dev/16/orig 2025-08-14T21:22:30.4706084Z * [new branch] gh/yangw-dev/17/base -> origin/gh/yangw-dev/17/base 2025-08-14T21:22:30.4706998Z * [new branch] gh/yangw-dev/17/head -> origin/gh/yangw-dev/17/head 2025-08-14T21:22:30.4707843Z * [new branch] gh/yangw-dev/17/orig -> origin/gh/yangw-dev/17/orig 2025-08-14T21:22:30.4709047Z * [new branch] gh/yangw-dev/18/base -> origin/gh/yangw-dev/18/base 2025-08-14T21:22:30.4709929Z * [new branch] gh/yangw-dev/18/head -> origin/gh/yangw-dev/18/head 2025-08-14T21:22:30.4710819Z * [new branch] gh/yangw-dev/18/orig -> origin/gh/yangw-dev/18/orig 2025-08-14T21:22:30.4712019Z * [new branch] gh/yangw-dev/19/base -> origin/gh/yangw-dev/19/base 2025-08-14T21:22:30.4717171Z * [new branch] gh/yangw-dev/19/head -> origin/gh/yangw-dev/19/head 2025-08-14T21:22:30.4721456Z * [new branch] gh/yangw-dev/19/orig -> origin/gh/yangw-dev/19/orig 2025-08-14T21:22:30.4721674Z * [new branch] gh/yangw-dev/2/base -> origin/gh/yangw-dev/2/base 2025-08-14T21:22:30.4721881Z * [new branch] gh/yangw-dev/2/head -> origin/gh/yangw-dev/2/head 2025-08-14T21:22:30.4722097Z * [new branch] gh/yangw-dev/3/base -> origin/gh/yangw-dev/3/base 2025-08-14T21:22:30.4722607Z * [new branch] gh/yangw-dev/3/head -> 
origin/gh/yangw-dev/3/head 2025-08-14T21:22:30.4723856Z * [new branch] gh/yangw-dev/4/base -> origin/gh/yangw-dev/4/base 2025-08-14T21:22:30.4724717Z * [new branch] gh/yangw-dev/4/head -> origin/gh/yangw-dev/4/head 2025-08-14T21:22:30.4725868Z * [new branch] gh/yangw-dev/5/base -> origin/gh/yangw-dev/5/base 2025-08-14T21:22:30.4726762Z * [new branch] gh/yangw-dev/5/head -> origin/gh/yangw-dev/5/head 2025-08-14T21:22:30.4731591Z * [new branch] gh/yangw-dev/6/base -> origin/gh/yangw-dev/6/base 2025-08-14T21:22:30.4731792Z * [new branch] gh/yangw-dev/6/head -> origin/gh/yangw-dev/6/head 2025-08-14T21:22:30.4731954Z * [new branch] gh/yangw-dev/7/base -> origin/gh/yangw-dev/7/base 2025-08-14T21:22:30.4732120Z * [new branch] gh/yangw-dev/7/head -> origin/gh/yangw-dev/7/head 2025-08-14T21:22:30.4732339Z * [new branch] gh/yangw-dev/8/base -> origin/gh/yangw-dev/8/base 2025-08-14T21:22:30.4735819Z * [new branch] gh/yangw-dev/8/head -> origin/gh/yangw-dev/8/head 2025-08-14T21:22:30.4736041Z * [new branch] gh/yangw-dev/8/orig -> origin/gh/yangw-dev/8/orig 2025-08-14T21:22:30.4736245Z * [new branch] gh/yangw-dev/9/base -> origin/gh/yangw-dev/9/base 2025-08-14T21:22:30.4736510Z * [new branch] gh/yangw-dev/9/head -> origin/gh/yangw-dev/9/head 2025-08-14T21:22:30.4737044Z * [new branch] gh/yangw-dev/9/orig -> origin/gh/yangw-dev/9/orig 2025-08-14T21:22:30.4738598Z * [new branch] gh/ydwu4/233/base -> origin/gh/ydwu4/233/base 2025-08-14T21:22:30.4739515Z * [new branch] gh/ydwu4/233/head -> origin/gh/ydwu4/233/head 2025-08-14T21:22:30.4740473Z * [new branch] gh/ydwu4/233/orig -> origin/gh/ydwu4/233/orig 2025-08-14T21:22:30.4750619Z * [new branch] gh/ydwu4/246/base -> origin/gh/ydwu4/246/base 2025-08-14T21:22:30.4750812Z * [new branch] gh/ydwu4/246/head -> origin/gh/ydwu4/246/head 2025-08-14T21:22:30.4751010Z * [new branch] gh/ydwu4/246/orig -> origin/gh/ydwu4/246/orig 2025-08-14T21:22:30.4751204Z * [new branch] gh/ydwu4/253/base -> origin/gh/ydwu4/253/base 2025-08-14T21:22:30.4751378Z * [new branch] gh/ydwu4/253/head -> origin/gh/ydwu4/253/head 2025-08-14T21:22:30.4751828Z * [new branch] gh/ydwu4/253/orig -> origin/gh/ydwu4/253/orig 2025-08-14T21:22:30.4753132Z * [new branch] gh/ydwu4/255/base -> origin/gh/ydwu4/255/base 2025-08-14T21:22:30.4754049Z * [new branch] gh/ydwu4/255/head -> origin/gh/ydwu4/255/head 2025-08-14T21:22:30.4754925Z * [new branch] gh/ydwu4/255/orig -> origin/gh/ydwu4/255/orig 2025-08-14T21:22:30.4760526Z * [new branch] gh/ydwu4/259/base -> origin/gh/ydwu4/259/base 2025-08-14T21:22:30.4760719Z * [new branch] gh/ydwu4/259/head -> origin/gh/ydwu4/259/head 2025-08-14T21:22:30.4760888Z * [new branch] gh/ydwu4/259/orig -> origin/gh/ydwu4/259/orig 2025-08-14T21:22:30.4761088Z * [new branch] gh/ydwu4/262/base -> origin/gh/ydwu4/262/base 2025-08-14T21:22:30.4761265Z * [new branch] gh/ydwu4/262/head -> origin/gh/ydwu4/262/head 2025-08-14T21:22:30.4762200Z * [new branch] gh/ydwu4/262/orig -> origin/gh/ydwu4/262/orig 2025-08-14T21:22:30.4763483Z * [new branch] gh/ydwu4/263/base -> origin/gh/ydwu4/263/base 2025-08-14T21:22:30.4764424Z * [new branch] gh/ydwu4/263/head -> origin/gh/ydwu4/263/head 2025-08-14T21:22:30.4765300Z * [new branch] gh/ydwu4/263/orig -> origin/gh/ydwu4/263/orig 2025-08-14T21:22:30.4766686Z * [new branch] gh/ydwu4/269/base -> origin/gh/ydwu4/269/base 2025-08-14T21:22:30.4767526Z * [new branch] gh/ydwu4/269/head -> origin/gh/ydwu4/269/head 2025-08-14T21:22:30.4768497Z * [new branch] gh/ydwu4/269/orig -> origin/gh/ydwu4/269/orig 2025-08-14T21:22:30.4769768Z * [new branch] 
gh/ydwu4/270/base -> origin/gh/ydwu4/270/base 2025-08-14T21:22:30.4777016Z * [new branch] gh/ydwu4/270/head -> origin/gh/ydwu4/270/head 2025-08-14T21:22:30.4777194Z * [new branch] gh/ydwu4/270/orig -> origin/gh/ydwu4/270/orig 2025-08-14T21:22:30.4777354Z * [new branch] gh/ydwu4/272/base -> origin/gh/ydwu4/272/base 2025-08-14T21:22:30.4778327Z * [new branch] gh/ydwu4/272/head -> origin/gh/ydwu4/272/head 2025-08-14T21:22:30.4779557Z * [new branch] gh/ydwu4/272/orig -> origin/gh/ydwu4/272/orig 2025-08-14T21:22:30.4780716Z * [new branch] gh/ydwu4/275/base -> origin/gh/ydwu4/275/base 2025-08-14T21:22:30.4781684Z * [new branch] gh/ydwu4/275/head -> origin/gh/ydwu4/275/head 2025-08-14T21:22:30.4782591Z * [new branch] gh/ydwu4/275/orig -> origin/gh/ydwu4/275/orig 2025-08-14T21:22:30.4783740Z * [new branch] gh/ydwu4/276/base -> origin/gh/ydwu4/276/base 2025-08-14T21:22:30.4784719Z * [new branch] gh/ydwu4/276/head -> origin/gh/ydwu4/276/head 2025-08-14T21:22:30.4791774Z * [new branch] gh/ydwu4/276/orig -> origin/gh/ydwu4/276/orig 2025-08-14T21:22:30.4791980Z * [new branch] gh/ydwu4/277/base -> origin/gh/ydwu4/277/base 2025-08-14T21:22:30.4792169Z * [new branch] gh/ydwu4/277/head -> origin/gh/ydwu4/277/head 2025-08-14T21:22:30.4792364Z * [new branch] gh/ydwu4/277/orig -> origin/gh/ydwu4/277/orig 2025-08-14T21:22:30.4792550Z * [new branch] gh/ydwu4/278/base -> origin/gh/ydwu4/278/base 2025-08-14T21:22:30.4792737Z * [new branch] gh/ydwu4/278/head -> origin/gh/ydwu4/278/head 2025-08-14T21:22:30.4792932Z * [new branch] gh/ydwu4/278/orig -> origin/gh/ydwu4/278/orig 2025-08-14T21:22:30.4793481Z * [new branch] gh/ydwu4/279/base -> origin/gh/ydwu4/279/base 2025-08-14T21:22:30.4794586Z * [new branch] gh/ydwu4/279/head -> origin/gh/ydwu4/279/head 2025-08-14T21:22:30.4795476Z * [new branch] gh/ydwu4/279/orig -> origin/gh/ydwu4/279/orig 2025-08-14T21:22:30.4796932Z * [new branch] gh/ydwu4/280/base -> origin/gh/ydwu4/280/base 2025-08-14T21:22:30.4797837Z * [new branch] gh/ydwu4/280/head -> origin/gh/ydwu4/280/head 2025-08-14T21:22:30.4798742Z * [new branch] gh/ydwu4/280/orig -> origin/gh/ydwu4/280/orig 2025-08-14T21:22:30.4805040Z * [new branch] gh/ydwu4/281/base -> origin/gh/ydwu4/281/base 2025-08-14T21:22:30.4806123Z * [new branch] gh/ydwu4/281/head -> origin/gh/ydwu4/281/head 2025-08-14T21:22:30.4807114Z * [new branch] gh/ydwu4/281/orig -> origin/gh/ydwu4/281/orig 2025-08-14T21:22:30.4808282Z * [new branch] gh/ydwu4/282/base -> origin/gh/ydwu4/282/base 2025-08-14T21:22:30.4809258Z * [new branch] gh/ydwu4/282/head -> origin/gh/ydwu4/282/head 2025-08-14T21:22:30.4810242Z * [new branch] gh/ydwu4/282/orig -> origin/gh/ydwu4/282/orig 2025-08-14T21:22:30.4811421Z * [new branch] gh/ydwu4/283/base -> origin/gh/ydwu4/283/base 2025-08-14T21:22:30.4812343Z * [new branch] gh/ydwu4/283/head -> origin/gh/ydwu4/283/head 2025-08-14T21:22:30.4813287Z * [new branch] gh/ydwu4/283/orig -> origin/gh/ydwu4/283/orig 2025-08-14T21:22:30.4822791Z * [new branch] gh/ydwu4/284/base -> origin/gh/ydwu4/284/base 2025-08-14T21:22:30.4822990Z * [new branch] gh/ydwu4/284/head -> origin/gh/ydwu4/284/head 2025-08-14T21:22:30.4823181Z * [new branch] gh/ydwu4/284/orig -> origin/gh/ydwu4/284/orig 2025-08-14T21:22:30.4823384Z * [new branch] gh/ydwu4/285/base -> origin/gh/ydwu4/285/base 2025-08-14T21:22:30.4823579Z * [new branch] gh/ydwu4/285/head -> origin/gh/ydwu4/285/head 2025-08-14T21:22:30.4823787Z * [new branch] gh/ydwu4/285/orig -> origin/gh/ydwu4/285/orig 2025-08-14T21:22:30.4823983Z * [new branch] gh/ydwu4/286/base -> 
origin/gh/ydwu4/286/base 2025-08-14T21:22:30.4824139Z * [new branch] gh/ydwu4/286/head -> origin/gh/ydwu4/286/head 2025-08-14T21:22:30.4824301Z * [new branch] gh/ydwu4/286/orig -> origin/gh/ydwu4/286/orig 2025-08-14T21:22:30.4824455Z * [new branch] gh/ydwu4/287/base -> origin/gh/ydwu4/287/base 2025-08-14T21:22:30.4824856Z * [new branch] gh/ydwu4/287/head -> origin/gh/ydwu4/287/head 2025-08-14T21:22:30.4825818Z * [new branch] gh/ydwu4/287/orig -> origin/gh/ydwu4/287/orig 2025-08-14T21:22:30.4827093Z * [new branch] gh/ydwu4/288/base -> origin/gh/ydwu4/288/base 2025-08-14T21:22:30.4828067Z * [new branch] gh/ydwu4/288/head -> origin/gh/ydwu4/288/head 2025-08-14T21:22:30.4833049Z * [new branch] gh/ydwu4/288/orig -> origin/gh/ydwu4/288/orig 2025-08-14T21:22:30.4839174Z * [new branch] gh/ydwu4/289/base -> origin/gh/ydwu4/289/base 2025-08-14T21:22:30.4840173Z * [new branch] gh/ydwu4/289/head -> origin/gh/ydwu4/289/head 2025-08-14T21:22:30.4841082Z * [new branch] gh/ydwu4/289/orig -> origin/gh/ydwu4/289/orig 2025-08-14T21:22:30.4842456Z * [new branch] gh/ydwu4/290/base -> origin/gh/ydwu4/290/base 2025-08-14T21:22:30.4847674Z * [new branch] gh/ydwu4/290/head -> origin/gh/ydwu4/290/head 2025-08-14T21:22:30.4847877Z * [new branch] gh/ydwu4/290/orig -> origin/gh/ydwu4/290/orig 2025-08-14T21:22:30.4848136Z * [new branch] gh/ydwu4/291/base -> origin/gh/ydwu4/291/base 2025-08-14T21:22:30.4852278Z * [new branch] gh/ydwu4/291/head -> origin/gh/ydwu4/291/head 2025-08-14T21:22:30.4852500Z * [new branch] gh/ydwu4/291/orig -> origin/gh/ydwu4/291/orig 2025-08-14T21:22:30.4852695Z * [new branch] gh/ydwu4/292/base -> origin/gh/ydwu4/292/base 2025-08-14T21:22:30.4853200Z * [new branch] gh/ydwu4/292/head -> origin/gh/ydwu4/292/head 2025-08-14T21:22:30.4854167Z * [new branch] gh/ydwu4/292/orig -> origin/gh/ydwu4/292/orig 2025-08-14T21:22:30.4855369Z * [new branch] gh/ydwu4/293/base -> origin/gh/ydwu4/293/base 2025-08-14T21:22:30.4856326Z * [new branch] gh/ydwu4/293/head -> origin/gh/ydwu4/293/head 2025-08-14T21:22:30.4857178Z * [new branch] gh/ydwu4/293/orig -> origin/gh/ydwu4/293/orig 2025-08-14T21:22:30.4864158Z * [new branch] gh/ydwu4/294/base -> origin/gh/ydwu4/294/base 2025-08-14T21:22:30.4864355Z * [new branch] gh/ydwu4/294/head -> origin/gh/ydwu4/294/head 2025-08-14T21:22:30.4864724Z * [new branch] gh/ydwu4/294/orig -> origin/gh/ydwu4/294/orig 2025-08-14T21:22:30.4864913Z * [new branch] gh/ydwu4/295/base -> origin/gh/ydwu4/295/base 2025-08-14T21:22:30.4865102Z * [new branch] gh/ydwu4/295/head -> origin/gh/ydwu4/295/head 2025-08-14T21:22:30.4865299Z * [new branch] gh/ydwu4/295/orig -> origin/gh/ydwu4/295/orig 2025-08-14T21:22:30.4865528Z * [new branch] gh/ydwu4/296/base -> origin/gh/ydwu4/296/base 2025-08-14T21:22:30.4865859Z * [new branch] gh/ydwu4/296/head -> origin/gh/ydwu4/296/head 2025-08-14T21:22:30.4866841Z * [new branch] gh/ydwu4/296/orig -> origin/gh/ydwu4/296/orig 2025-08-14T21:22:30.4868137Z * [new branch] gh/ydwu4/297/base -> origin/gh/ydwu4/297/base 2025-08-14T21:22:30.4868998Z * [new branch] gh/ydwu4/297/head -> origin/gh/ydwu4/297/head 2025-08-14T21:22:30.4869881Z * [new branch] gh/ydwu4/297/orig -> origin/gh/ydwu4/297/orig 2025-08-14T21:22:30.4871043Z * [new branch] gh/ydwu4/298/base -> origin/gh/ydwu4/298/base 2025-08-14T21:22:30.4871953Z * [new branch] gh/ydwu4/298/head -> origin/gh/ydwu4/298/head 2025-08-14T21:22:30.4877273Z * [new branch] gh/ydwu4/298/orig -> origin/gh/ydwu4/298/orig 2025-08-14T21:22:30.4878745Z * [new branch] gh/ydwu4/299/base -> origin/gh/ydwu4/299/base 
2025-08-14T21:22:30.4879660Z * [new branch] gh/ydwu4/299/head -> origin/gh/ydwu4/299/head 2025-08-14T21:22:30.4880591Z * [new branch] gh/ydwu4/299/orig -> origin/gh/ydwu4/299/orig 2025-08-14T21:22:30.4882956Z * [new branch] gh/ydwu4/300/base -> origin/gh/ydwu4/300/base 2025-08-14T21:22:30.4884610Z * [new branch] gh/ydwu4/300/head -> origin/gh/ydwu4/300/head 2025-08-14T21:22:30.4885584Z * [new branch] gh/ydwu4/300/orig -> origin/gh/ydwu4/300/orig 2025-08-14T21:22:30.4893430Z * [new branch] gh/ydwu4/301/base -> origin/gh/ydwu4/301/base 2025-08-14T21:22:30.4893633Z * [new branch] gh/ydwu4/301/head -> origin/gh/ydwu4/301/head 2025-08-14T21:22:30.4893825Z * [new branch] gh/ydwu4/301/orig -> origin/gh/ydwu4/301/orig 2025-08-14T21:22:30.4894031Z * [new branch] gh/ydwu4/302/base -> origin/gh/ydwu4/302/base 2025-08-14T21:22:30.4894227Z * [new branch] gh/ydwu4/302/head -> origin/gh/ydwu4/302/head 2025-08-14T21:22:30.4894434Z * [new branch] gh/ydwu4/302/orig -> origin/gh/ydwu4/302/orig 2025-08-14T21:22:30.4894629Z * [new branch] gh/ydwu4/303/base -> origin/gh/ydwu4/303/base 2025-08-14T21:22:30.4894868Z * [new branch] gh/ydwu4/303/head -> origin/gh/ydwu4/303/head 2025-08-14T21:22:30.4895514Z * [new branch] gh/ydwu4/303/orig -> origin/gh/ydwu4/303/orig 2025-08-14T21:22:30.4896719Z * [new branch] gh/ydwu4/304/base -> origin/gh/ydwu4/304/base 2025-08-14T21:22:30.4897682Z * [new branch] gh/ydwu4/304/head -> origin/gh/ydwu4/304/head 2025-08-14T21:22:30.4898605Z * [new branch] gh/ydwu4/304/orig -> origin/gh/ydwu4/304/orig 2025-08-14T21:22:30.4899972Z * [new branch] gh/ydwu4/305/base -> origin/gh/ydwu4/305/base 2025-08-14T21:22:30.4901017Z * [new branch] gh/ydwu4/305/head -> origin/gh/ydwu4/305/head 2025-08-14T21:22:30.4909891Z * [new branch] gh/ydwu4/305/orig -> origin/gh/ydwu4/305/orig 2025-08-14T21:22:30.4910089Z * [new branch] gh/ydwu4/306/base -> origin/gh/ydwu4/306/base 2025-08-14T21:22:30.4910363Z * [new branch] gh/ydwu4/306/head -> origin/gh/ydwu4/306/head 2025-08-14T21:22:30.4910526Z * [new branch] gh/ydwu4/306/orig -> origin/gh/ydwu4/306/orig 2025-08-14T21:22:30.4911004Z * [new branch] gh/ydwu4/307/base -> origin/gh/ydwu4/307/base 2025-08-14T21:22:30.4911933Z * [new branch] gh/ydwu4/307/head -> origin/gh/ydwu4/307/head 2025-08-14T21:22:30.4912831Z * [new branch] gh/ydwu4/307/orig -> origin/gh/ydwu4/307/orig 2025-08-14T21:22:30.4914226Z * [new branch] gh/ydwu4/308/base -> origin/gh/ydwu4/308/base 2025-08-14T21:22:30.4915225Z * [new branch] gh/ydwu4/308/head -> origin/gh/ydwu4/308/head 2025-08-14T21:22:30.4920086Z * [new branch] gh/ydwu4/308/orig -> origin/gh/ydwu4/308/orig 2025-08-14T21:22:30.4920314Z * [new branch] gh/ydwu4/309/base -> origin/gh/ydwu4/309/base 2025-08-14T21:22:30.4920503Z * [new branch] gh/ydwu4/309/head -> origin/gh/ydwu4/309/head 2025-08-14T21:22:30.4920659Z * [new branch] gh/ydwu4/309/orig -> origin/gh/ydwu4/309/orig 2025-08-14T21:22:30.4920812Z * [new branch] gh/ydwu4/310/base -> origin/gh/ydwu4/310/base 2025-08-14T21:22:30.4921563Z * [new branch] gh/ydwu4/310/head -> origin/gh/ydwu4/310/head 2025-08-14T21:22:30.4922683Z * [new branch] gh/ydwu4/310/orig -> origin/gh/ydwu4/310/orig 2025-08-14T21:22:30.4923846Z * [new branch] gh/ydwu4/311/base -> origin/gh/ydwu4/311/base 2025-08-14T21:22:30.4924822Z * [new branch] gh/ydwu4/311/head -> origin/gh/ydwu4/311/head 2025-08-14T21:22:30.4925733Z * [new branch] gh/ydwu4/311/orig -> origin/gh/ydwu4/311/orig 2025-08-14T21:22:30.4927274Z * [new branch] gh/yf225/133/base -> origin/gh/yf225/133/base 2025-08-14T21:22:30.4928286Z * [new branch] 
gh/yf225/133/head -> origin/gh/yf225/133/head 2025-08-14T21:22:30.4929613Z * [new branch] gh/yf225/171/base -> origin/gh/yf225/171/base 2025-08-14T21:22:30.4934563Z * [new branch] gh/yf225/171/head -> origin/gh/yf225/171/head 2025-08-14T21:22:30.4935982Z * [new branch] gh/yf225/171/orig -> origin/gh/yf225/171/orig 2025-08-14T21:22:30.4937347Z * [new branch] gh/yf225/172/base -> origin/gh/yf225/172/base 2025-08-14T21:22:30.4938168Z * [new branch] gh/yf225/172/head -> origin/gh/yf225/172/head 2025-08-14T21:22:30.4939122Z * [new branch] gh/yf225/172/orig -> origin/gh/yf225/172/orig 2025-08-14T21:22:30.4940358Z * [new branch] gh/yf225/93/base -> origin/gh/yf225/93/base 2025-08-14T21:22:30.4941271Z * [new branch] gh/yf225/93/head -> origin/gh/yf225/93/head 2025-08-14T21:22:30.4943271Z * [new branch] gh/yifuwang/152/base -> origin/gh/yifuwang/152/base 2025-08-14T21:22:30.4944373Z * [new branch] gh/yifuwang/152/head -> origin/gh/yifuwang/152/head 2025-08-14T21:22:30.4949510Z * [new branch] gh/yifuwang/152/orig -> origin/gh/yifuwang/152/orig 2025-08-14T21:22:30.4949677Z * [new branch] gh/yifuwang/195/base -> origin/gh/yifuwang/195/base 2025-08-14T21:22:30.4949840Z * [new branch] gh/yifuwang/195/head -> origin/gh/yifuwang/195/head 2025-08-14T21:22:30.4950014Z * [new branch] gh/yifuwang/195/orig -> origin/gh/yifuwang/195/orig 2025-08-14T21:22:30.4950712Z * [new branch] gh/yiming0416/1/base -> origin/gh/yiming0416/1/base 2025-08-14T21:22:30.4953497Z * [new branch] gh/yiming0416/1/head -> origin/gh/yiming0416/1/head 2025-08-14T21:22:30.4953728Z * [new branch] gh/yiming0416/2/base -> origin/gh/yiming0416/2/base 2025-08-14T21:22:30.4954046Z * [new branch] gh/yiming0416/2/head -> origin/gh/yiming0416/2/head 2025-08-14T21:22:30.4955276Z * [new branch] gh/ysiraichi/79/base -> origin/gh/ysiraichi/79/base 2025-08-14T21:22:30.4956182Z * [new branch] gh/ysiraichi/79/head -> origin/gh/ysiraichi/79/head 2025-08-14T21:22:30.4957268Z * [new branch] gh/ysiraichi/79/orig -> origin/gh/ysiraichi/79/orig 2025-08-14T21:22:30.4958472Z * [new branch] gh/ysiraichi/81/base -> origin/gh/ysiraichi/81/base 2025-08-14T21:22:30.4959583Z * [new branch] gh/ysiraichi/81/head -> origin/gh/ysiraichi/81/head 2025-08-14T21:22:30.4968827Z * [new branch] gh/ysiraichi/81/orig -> origin/gh/ysiraichi/81/orig 2025-08-14T21:22:30.4970045Z * [new branch] gh/ysiraichi/84/base -> origin/gh/ysiraichi/84/base 2025-08-14T21:22:30.4971110Z * [new branch] gh/ysiraichi/84/head -> origin/gh/ysiraichi/84/head 2025-08-14T21:22:30.4972068Z * [new branch] gh/ysiraichi/84/orig -> origin/gh/ysiraichi/84/orig 2025-08-14T21:22:30.4973409Z * [new branch] gh/ysiraichi/85/base -> origin/gh/ysiraichi/85/base 2025-08-14T21:22:30.4982289Z * [new branch] gh/ysiraichi/85/head -> origin/gh/ysiraichi/85/head 2025-08-14T21:22:30.4982495Z * [new branch] gh/ysiraichi/85/orig -> origin/gh/ysiraichi/85/orig 2025-08-14T21:22:30.4982707Z * [new branch] gh/ysiraichi/86/base -> origin/gh/ysiraichi/86/base 2025-08-14T21:22:30.4982915Z * [new branch] gh/ysiraichi/86/head -> origin/gh/ysiraichi/86/head 2025-08-14T21:22:30.4983120Z * [new branch] gh/ysiraichi/86/orig -> origin/gh/ysiraichi/86/orig 2025-08-14T21:22:30.4983334Z * [new branch] gh/ysiraichi/87/base -> origin/gh/ysiraichi/87/base 2025-08-14T21:22:30.4983642Z * [new branch] gh/ysiraichi/87/head -> origin/gh/ysiraichi/87/head 2025-08-14T21:22:30.4983872Z * [new branch] gh/ysiraichi/87/orig -> origin/gh/ysiraichi/87/orig 2025-08-14T21:22:30.4984077Z * [new branch] gh/ysiraichi/88/base -> origin/gh/ysiraichi/88/base 
2025-08-14T21:22:30.4984559Z * [new branch] gh/ysiraichi/88/head -> origin/gh/ysiraichi/88/head 2025-08-14T21:22:30.4985595Z * [new branch] gh/ysiraichi/88/orig -> origin/gh/ysiraichi/88/orig 2025-08-14T21:22:30.4987167Z * [new branch] gh/yuguo68/1/base -> origin/gh/yuguo68/1/base 2025-08-14T21:22:30.4988083Z * [new branch] gh/yuguo68/1/head -> origin/gh/yuguo68/1/head 2025-08-14T21:22:30.4992523Z * [new branch] gh/yuguo68/1/orig -> origin/gh/yuguo68/1/orig 2025-08-14T21:22:30.4992714Z * [new branch] gh/yuguo68/2/base -> origin/gh/yuguo68/2/base 2025-08-14T21:22:30.4992934Z * [new branch] gh/yuguo68/2/head -> origin/gh/yuguo68/2/head 2025-08-14T21:22:30.4993105Z * [new branch] gh/yuguo68/2/orig -> origin/gh/yuguo68/2/orig 2025-08-14T21:22:30.4995256Z * [new branch] gh/zhxchen17/25/base -> origin/gh/zhxchen17/25/base 2025-08-14T21:22:30.4995474Z * [new branch] gh/zhxchen17/25/head -> origin/gh/zhxchen17/25/head 2025-08-14T21:22:30.4995879Z * [new branch] gh/zhxchen17/25/orig -> origin/gh/zhxchen17/25/orig 2025-08-14T21:22:30.4997378Z * [new branch] gh/zhxchen17/31/base -> origin/gh/zhxchen17/31/base 2025-08-14T21:22:30.4998350Z * [new branch] gh/zhxchen17/31/head -> origin/gh/zhxchen17/31/head 2025-08-14T21:22:30.4999778Z * [new branch] gh/zhxchen17/31/orig -> origin/gh/zhxchen17/31/orig 2025-08-14T21:22:30.5001260Z * [new branch] gh/zhxchen17/33/base -> origin/gh/zhxchen17/33/base 2025-08-14T21:22:30.5002378Z * [new branch] gh/zhxchen17/33/head -> origin/gh/zhxchen17/33/head 2025-08-14T21:22:30.5009968Z * [new branch] gh/zhxchen17/33/orig -> origin/gh/zhxchen17/33/orig 2025-08-14T21:22:30.5011231Z * [new branch] gh/zhxchen17/34/base -> origin/gh/zhxchen17/34/base 2025-08-14T21:22:30.5012198Z * [new branch] gh/zhxchen17/34/head -> origin/gh/zhxchen17/34/head 2025-08-14T21:22:30.5013325Z * [new branch] gh/zhxchen17/35/base -> origin/gh/zhxchen17/35/base 2025-08-14T21:22:30.5014167Z * [new branch] gh/zhxchen17/35/head -> origin/gh/zhxchen17/35/head 2025-08-14T21:22:30.5015252Z * [new branch] gh/zhxchen17/36/base -> origin/gh/zhxchen17/36/base 2025-08-14T21:22:30.5016165Z * [new branch] gh/zhxchen17/36/head -> origin/gh/zhxchen17/36/head 2025-08-14T21:22:30.5017259Z * [new branch] gh/zhxchen17/36/orig -> origin/gh/zhxchen17/36/orig 2025-08-14T21:22:30.5021595Z * [new branch] gh/zklaus/1/base -> origin/gh/zklaus/1/base 2025-08-14T21:22:30.5021827Z * [new branch] gh/zklaus/1/head -> origin/gh/zklaus/1/head 2025-08-14T21:22:30.5022022Z * [new branch] gh/zklaus/1/orig -> origin/gh/zklaus/1/orig 2025-08-14T21:22:30.5022184Z * [new branch] gh/zklaus/10/base -> origin/gh/zklaus/10/base 2025-08-14T21:22:30.5023070Z * [new branch] gh/zklaus/10/head -> origin/gh/zklaus/10/head 2025-08-14T21:22:30.5023971Z * [new branch] gh/zklaus/10/orig -> origin/gh/zklaus/10/orig 2025-08-14T21:22:30.5025139Z * [new branch] gh/zklaus/11/base -> origin/gh/zklaus/11/base 2025-08-14T21:22:30.5026128Z * [new branch] gh/zklaus/11/head -> origin/gh/zklaus/11/head 2025-08-14T21:22:30.5027115Z * [new branch] gh/zklaus/11/orig -> origin/gh/zklaus/11/orig 2025-08-14T21:22:30.5028290Z * [new branch] gh/zklaus/12/base -> origin/gh/zklaus/12/base 2025-08-14T21:22:30.5029290Z * [new branch] gh/zklaus/12/head -> origin/gh/zklaus/12/head 2025-08-14T21:22:30.5030158Z * [new branch] gh/zklaus/12/orig -> origin/gh/zklaus/12/orig 2025-08-14T21:22:30.5031408Z * [new branch] gh/zklaus/14/base -> origin/gh/zklaus/14/base 2025-08-14T21:22:30.5036677Z * [new branch] gh/zklaus/14/head -> origin/gh/zklaus/14/head 2025-08-14T21:22:30.5037663Z * 
[new branch] gh/zklaus/14/orig -> origin/gh/zklaus/14/orig 2025-08-14T21:22:30.5038965Z * [new branch] gh/zklaus/15/base -> origin/gh/zklaus/15/base 2025-08-14T21:22:30.5039896Z * [new branch] gh/zklaus/15/head -> origin/gh/zklaus/15/head 2025-08-14T21:22:30.5040909Z * [new branch] gh/zklaus/15/orig -> origin/gh/zklaus/15/orig 2025-08-14T21:22:30.5042287Z * [new branch] gh/zklaus/16/base -> origin/gh/zklaus/16/base 2025-08-14T21:22:30.5043203Z * [new branch] gh/zklaus/16/head -> origin/gh/zklaus/16/head 2025-08-14T21:22:30.5044100Z * [new branch] gh/zklaus/16/orig -> origin/gh/zklaus/16/orig 2025-08-14T21:22:30.5045326Z * [new branch] gh/zklaus/17/base -> origin/gh/zklaus/17/base 2025-08-14T21:22:30.5055053Z * [new branch] gh/zklaus/17/head -> origin/gh/zklaus/17/head 2025-08-14T21:22:30.5055262Z * [new branch] gh/zklaus/17/orig -> origin/gh/zklaus/17/orig 2025-08-14T21:22:30.5055452Z * [new branch] gh/zklaus/18/base -> origin/gh/zklaus/18/base 2025-08-14T21:22:30.5055642Z * [new branch] gh/zklaus/18/head -> origin/gh/zklaus/18/head 2025-08-14T21:22:30.5055852Z * [new branch] gh/zklaus/18/orig -> origin/gh/zklaus/18/orig 2025-08-14T21:22:30.5056406Z * [new branch] gh/zklaus/19/base -> origin/gh/zklaus/19/base 2025-08-14T21:22:30.5056565Z * [new branch] gh/zklaus/19/head -> origin/gh/zklaus/19/head 2025-08-14T21:22:30.5056728Z * [new branch] gh/zklaus/19/orig -> origin/gh/zklaus/19/orig 2025-08-14T21:22:30.5056887Z * [new branch] gh/zklaus/7/base -> origin/gh/zklaus/7/base 2025-08-14T21:22:30.5057049Z * [new branch] gh/zklaus/7/head -> origin/gh/zklaus/7/head 2025-08-14T21:22:30.5057205Z * [new branch] gh/zklaus/7/orig -> origin/gh/zklaus/7/orig 2025-08-14T21:22:30.5058380Z * [new branch] gh/zklaus/9/base -> origin/gh/zklaus/9/base 2025-08-14T21:22:30.5059349Z * [new branch] gh/zklaus/9/head -> origin/gh/zklaus/9/head 2025-08-14T21:22:30.5060245Z * [new branch] gh/zklaus/9/orig -> origin/gh/zklaus/9/orig 2025-08-14T21:22:30.5069331Z * [new branch] gh/zou3519/1175/base -> origin/gh/zou3519/1175/base 2025-08-14T21:22:30.5069544Z * [new branch] gh/zou3519/1175/head -> origin/gh/zou3519/1175/head 2025-08-14T21:22:30.5069754Z * [new branch] gh/zou3519/1175/orig -> origin/gh/zou3519/1175/orig 2025-08-14T21:22:30.5069959Z * [new branch] gh/zou3519/1177/base -> origin/gh/zou3519/1177/base 2025-08-14T21:22:30.5070287Z * [new branch] gh/zou3519/1177/head -> origin/gh/zou3519/1177/head 2025-08-14T21:22:30.5071299Z * [new branch] gh/zou3519/1177/orig -> origin/gh/zou3519/1177/orig 2025-08-14T21:22:30.5072548Z * [new branch] gh/zou3519/1187/base -> origin/gh/zou3519/1187/base 2025-08-14T21:22:30.5073447Z * [new branch] gh/zou3519/1187/head -> origin/gh/zou3519/1187/head 2025-08-14T21:22:30.5074475Z * [new branch] gh/zou3519/1187/orig -> origin/gh/zou3519/1187/orig 2025-08-14T21:22:30.5079571Z * [new branch] gh/zou3519/1188/base -> origin/gh/zou3519/1188/base 2025-08-14T21:22:30.5079766Z * [new branch] gh/zou3519/1188/head -> origin/gh/zou3519/1188/head 2025-08-14T21:22:30.5079929Z * [new branch] gh/zou3519/1188/orig -> origin/gh/zou3519/1188/orig 2025-08-14T21:22:30.5080091Z * [new branch] gh/zou3519/1189/base -> origin/gh/zou3519/1189/base 2025-08-14T21:22:30.5080280Z * [new branch] gh/zou3519/1189/head -> origin/gh/zou3519/1189/head 2025-08-14T21:22:30.5081353Z * [new branch] gh/zou3519/1189/orig -> origin/gh/zou3519/1189/orig 2025-08-14T21:22:30.5082674Z * [new branch] gh/zou3519/1190/base -> origin/gh/zou3519/1190/base 2025-08-14T21:22:30.5084007Z * [new branch] gh/zou3519/1190/head -> 
origin/gh/zou3519/1190/head 2025-08-14T21:22:30.5084963Z * [new branch] gh/zou3519/1190/orig -> origin/gh/zou3519/1190/orig 2025-08-14T21:22:30.5086301Z * [new branch] gh/zou3519/1191/base -> origin/gh/zou3519/1191/base 2025-08-14T21:22:30.5087359Z * [new branch] gh/zou3519/1191/head -> origin/gh/zou3519/1191/head 2025-08-14T21:22:30.5088373Z * [new branch] gh/zou3519/1191/orig -> origin/gh/zou3519/1191/orig 2025-08-14T21:22:30.5094082Z * [new branch] gh/zpcore/1/base -> origin/gh/zpcore/1/base 2025-08-14T21:22:30.5095250Z * [new branch] gh/zpcore/1/head -> origin/gh/zpcore/1/head 2025-08-14T21:22:30.5096595Z * [new branch] gh/zpcore/10/base -> origin/gh/zpcore/10/base 2025-08-14T21:22:30.5097410Z * [new branch] gh/zpcore/10/head -> origin/gh/zpcore/10/head 2025-08-14T21:22:30.5098303Z * [new branch] gh/zpcore/10/orig -> origin/gh/zpcore/10/orig 2025-08-14T21:22:30.5099657Z * [new branch] gh/zpcore/11/base -> origin/gh/zpcore/11/base 2025-08-14T21:22:30.5100618Z * [new branch] gh/zpcore/11/head -> origin/gh/zpcore/11/head 2025-08-14T21:22:30.5101542Z * [new branch] gh/zpcore/11/orig -> origin/gh/zpcore/11/orig 2025-08-14T21:22:30.5102725Z * [new branch] gh/zpcore/12/base -> origin/gh/zpcore/12/base 2025-08-14T21:22:30.5103789Z * [new branch] gh/zpcore/12/head -> origin/gh/zpcore/12/head 2025-08-14T21:22:30.5108670Z * [new branch] gh/zpcore/12/orig -> origin/gh/zpcore/12/orig 2025-08-14T21:22:30.5108893Z * [new branch] gh/zpcore/2/base -> origin/gh/zpcore/2/base 2025-08-14T21:22:30.5109060Z * [new branch] gh/zpcore/2/head -> origin/gh/zpcore/2/head 2025-08-14T21:22:30.5109213Z * [new branch] gh/zpcore/3/base -> origin/gh/zpcore/3/base 2025-08-14T21:22:30.5109373Z * [new branch] gh/zpcore/3/head -> origin/gh/zpcore/3/head 2025-08-14T21:22:30.5113150Z * [new branch] gh/zpcore/4/base -> origin/gh/zpcore/4/base 2025-08-14T21:22:30.5113360Z * [new branch] gh/zpcore/4/head -> origin/gh/zpcore/4/head 2025-08-14T21:22:30.5113557Z * [new branch] gh/zpcore/5/base -> origin/gh/zpcore/5/base 2025-08-14T21:22:30.5113711Z * [new branch] gh/zpcore/5/head -> origin/gh/zpcore/5/head 2025-08-14T21:22:30.5114603Z * [new branch] gh/zpcore/6/base -> origin/gh/zpcore/6/base 2025-08-14T21:22:30.5115500Z * [new branch] gh/zpcore/6/head -> origin/gh/zpcore/6/head 2025-08-14T21:22:30.5116532Z * [new branch] gh/zpcore/7/base -> origin/gh/zpcore/7/base 2025-08-14T21:22:30.5117463Z * [new branch] gh/zpcore/7/head -> origin/gh/zpcore/7/head 2025-08-14T21:22:30.5118705Z * [new branch] gh/zpcore/8/base -> origin/gh/zpcore/8/base 2025-08-14T21:22:30.5128185Z * [new branch] gh/zpcore/8/head -> origin/gh/zpcore/8/head 2025-08-14T21:22:30.5129817Z * [new branch] gh/zpcore/9/head -> origin/gh/zpcore/9/head 2025-08-14T21:22:30.5130837Z * [new branch] gh/zpcore/9/orig -> origin/gh/zpcore/9/orig 2025-08-14T21:22:30.5131982Z * [new branch] google-main -> origin/google-main 2025-08-14T21:22:30.5133399Z * [new branch] guangyey/external_stream -> origin/guangyey/external_stream 2025-08-14T21:22:30.5141816Z * [new branch] guangyey/host_alloc -> origin/guangyey/host_alloc 2025-08-14T21:22:30.5142031Z * [new branch] guangyey/test_2025 -> origin/guangyey/test_2025 2025-08-14T21:22:30.5142443Z * [new branch] guilhermeleobas/cherry-pick-55d87d9dfd9 -> origin/guilhermeleobas/cherry-pick-55d87d9dfd9 2025-08-14T21:22:30.5142700Z * [new branch] haozhe/bf16-dynamic-shape -> origin/haozhe/bf16-dynamic-shape 2025-08-14T21:22:30.5142897Z * [new branch] hc_baseline -> origin/hc_baseline 2025-08-14T21:22:30.5143378Z * [new branch] 
headeronlyScalarType -> origin/headeronlyScalarType 2025-08-14T21:22:30.5144791Z * [new branch] hf_update -> origin/hf_update 2025-08-14T21:22:30.5145833Z * [new branch] hhh_decomp_mul -> origin/hhh_decomp_mul 2025-08-14T21:22:30.5146748Z * [new branch] hhh_rand -> origin/hhh_rand 2025-08-14T21:22:30.5152385Z * [new branch] hoy/mmsplitk -> origin/hoy/mmsplitk 2025-08-14T21:22:30.5157304Z * [new branch] hoy/triton-PR3973 -> origin/hoy/triton-PR3973 2025-08-14T21:22:30.5158610Z * [new branch] hoy/triton-coalescing-baseline -> origin/hoy/triton-coalescing-baseline 2025-08-14T21:22:30.5159471Z * [new branch] hoy/triton-coalescing-min -> origin/hoy/triton-coalescing-min 2025-08-14T21:22:30.5160390Z * [new branch] hoy/triton-coalescing-new -> origin/hoy/triton-coalescing-new 2025-08-14T21:22:30.5161670Z * [new branch] hoy/triton-coalescing-vec -> origin/hoy/triton-coalescing-vec 2025-08-14T21:22:30.5167032Z * [new branch] inductordecompfix -> origin/inductordecompfix 2025-08-14T21:22:30.5168131Z * [new branch] inline -> origin/inline 2025-08-14T21:22:30.5169108Z * [new branch] inlining -> origin/inlining 2025-08-14T21:22:30.5170164Z * [new branch] inlining-ezyang -> origin/inlining-ezyang 2025-08-14T21:22:30.5171198Z * [new branch] int8_sdpa -> origin/int8_sdpa 2025-08-14T21:22:30.5172246Z * [new branch] invoke-subgraph -> origin/invoke-subgraph 2025-08-14T21:22:30.5173279Z * [new branch] issue#58739 -> origin/issue#58739 2025-08-14T21:22:30.5174581Z * [new branch] issue-154849 -> origin/issue-154849 2025-08-14T21:22:30.5175952Z * [new branch] ivanov/cherry-pick-ckpt-fixes -> origin/ivanov/cherry-pick-ckpt-fixes 2025-08-14T21:22:30.5185340Z * [new branch] jcaip/test-cusparselt-version-0.6.2 -> origin/jcaip/test-cusparselt-version-0.6.2 2025-08-14T21:22:30.5185645Z * [new branch] jcaip/update-cusparselt-0.6.2 -> origin/jcaip/update-cusparselt-0.6.2 2025-08-14T21:22:30.5185896Z * [new branch] jithunnair-amd-patch-1 -> origin/jithunnair-amd-patch-1 2025-08-14T21:22:30.5186197Z * [new branch] justinchu/attention-tests -> origin/justinchu/attention-tests 2025-08-14T21:22:30.5186428Z * [new branch] justinchu/native-qdq -> origin/justinchu/native-qdq 2025-08-14T21:22:30.5186772Z * [new branch] justinchuby/JitScalarType -> origin/justinchuby/JitScalarType 2025-08-14T21:22:30.5187017Z * [new branch] justinchuby/dynamo-true -> origin/justinchuby/dynamo-true 2025-08-14T21:22:30.5187255Z * [new branch] justinchuby/opset-20 -> origin/justinchuby/opset-20 2025-08-14T21:22:30.5187510Z * [new branch] kainan666/xlf_debug -> origin/kainan666/xlf_debug 2025-08-14T21:22:30.5187695Z * [new branch] kainan_test -> origin/kainan_test 2025-08-14T21:22:30.5189144Z * [new branch] leslie/enable_poc_reduction_fusion -> origin/leslie/enable_poc_reduction_fusion 2025-08-14T21:22:30.5190053Z * [new branch] leslie/test_group_gemm_epilogues -> origin/leslie/test_group_gemm_epilogues 2025-08-14T21:22:30.5191464Z * [new branch] lessw2020/fix_cutlass_cache_error -> origin/lessw2020/fix_cutlass_cache_error 2025-08-14T21:22:30.5199946Z * [new branch] liaoxuan/shm_all_reduce -> origin/liaoxuan/shm_all_reduce 2025-08-14T21:22:30.5200174Z * [new branch] liaoxuan/tags_issue -> origin/liaoxuan/tags_issue 2025-08-14T21:22:30.5200456Z * [new branch] liaoxuan/test_fa_disable_softmax -> origin/liaoxuan/test_fa_disable_softmax 2025-08-14T21:22:30.5200737Z * [new branch] liaoxuan/test_int8_sdpa -> origin/liaoxuan/test_int8_sdpa 2025-08-14T21:22:30.5200944Z * [new branch] lintbuilddocker -> origin/lintbuilddocker 2025-08-14T21:22:30.5201693Z * 
[new branch] llama4-stable -> origin/llama4-stable 2025-08-14T21:22:30.5202882Z * [new branch] logdetfix -> origin/logdetfix 2025-08-14T21:22:30.5204529Z * [new branch] lts/release/1.8 -> origin/lts/release/1.8 2025-08-14T21:22:30.5210058Z * [new branch] lucaskabela/#94773 -> origin/lucaskabela/#94773 2025-08-14T21:22:30.5210328Z * [new branch] lucaskabela/fix_157452 -> origin/lucaskabela/fix_157452 2025-08-14T21:22:30.5210704Z * [new branch] lucaskabela/fix_circular_import_158120 -> origin/lucaskabela/fix_circular_import_158120 2025-08-14T21:22:30.5210920Z * [new branch] lucaskabela/func_under_decomp -> origin/lucaskabela/func_under_decomp 2025-08-14T21:22:30.5211145Z * [new branch] lucaskabela/functional_in_dynamo -> origin/lucaskabela/functional_in_dynamo 2025-08-14T21:22:30.5211429Z * [new branch] lucaskabela/install_params_as_graph_attr -> origin/lucaskabela/install_params_as_graph_attr 2025-08-14T21:22:30.5212320Z * [new branch] lucaskabela/issue_120648 -> origin/lucaskabela/issue_120648 2025-08-14T21:22:30.5213574Z * [new branch] lucaskabela/parameters_as_graph_attr -> origin/lucaskabela/parameters_as_graph_attr 2025-08-14T21:22:30.5214482Z * [new branch] lucaskabela/registry_fix -> origin/lucaskabela/registry_fix 2025-08-14T21:22:30.5215501Z * [new branch] lucaskabela/remove_aot_dispatcher_metadata -> origin/lucaskabela/remove_aot_dispatcher_metadata 2025-08-14T21:22:30.5216374Z * [new branch] lucaskabela/type_guards -> origin/lucaskabela/type_guards 2025-08-14T21:22:30.5217341Z * [new branch] lucaskabela/typing-misc -> origin/lucaskabela/typing-misc 2025-08-14T21:22:30.5218284Z * [new branch] lucaskabela/typing_backends -> origin/lucaskabela/typing_backends 2025-08-14T21:22:30.5219304Z * [new branch] lucaskabela/typing_bytecode_analysis_transform -> origin/lucaskabela/typing_bytecode_analysis_transform 2025-08-14T21:22:30.5220086Z * [new branch] lucaskabela/typing_cache_files -> origin/lucaskabela/typing_cache_files 2025-08-14T21:22:30.5226769Z * [new branch] lucaskabela/typing_compile_autograd -> origin/lucaskabela/typing_compile_autograd 2025-08-14T21:22:30.5227080Z * [new branch] lucaskabela/typing_debug_utils.py -> origin/lucaskabela/typing_debug_utils.py 2025-08-14T21:22:30.5227339Z * [new branch] lucaskabela/typing_decorators -> origin/lucaskabela/typing_decorators 2025-08-14T21:22:30.5228323Z * [new branch] lucaskabela/typing_eval_frame -> origin/lucaskabela/typing_eval_frame 2025-08-14T21:22:30.5229225Z * [new branch] lucaskabela/typing_for_codegen -> origin/lucaskabela/typing_for_codegen 2025-08-14T21:22:30.5230133Z * [new branch] lucaskabela/typing_output_graph -> origin/lucaskabela/typing_output_graph 2025-08-14T21:22:30.5231149Z * [new branch] lucaskabela/typing_side_effects -> origin/lucaskabela/typing_side_effects 2025-08-14T21:22:30.5232192Z * [new branch] lucaskabela/typing_source_guard -> origin/lucaskabela/typing_source_guard 2025-08-14T21:22:30.5233089Z * [new branch] lucaskabela/typing_trace_rules -> origin/lucaskabela/typing_trace_rules 2025-08-14T21:22:30.5233980Z * [new branch] lucaskabela/typing_utils.py -> origin/lucaskabela/typing_utils.py 2025-08-14T21:22:30.5241549Z * [new branch] lucaskabela/typing_utils_improvements -> origin/lucaskabela/typing_utils_improvements 2025-08-14T21:22:30.5241723Z * [new branch] main -> origin/main 2025-08-14T21:22:30.5242057Z * [new branch] main-enable-b200-distributed-tests -> origin/main-enable-b200-distributed-tests 2025-08-14T21:22:30.5242275Z * [new branch] malfet-patch-1 -> origin/malfet-patch-1 
2025-08-14T21:22:30.5242483Z * [new branch] malfet-patch-10 -> origin/malfet-patch-10 2025-08-14T21:22:30.5242680Z * [new branch] malfet-patch-11 -> origin/malfet-patch-11 2025-08-14T21:22:30.5248366Z * [new branch] malfet-patch-13 -> origin/malfet-patch-13 2025-08-14T21:22:30.5248638Z * [new branch] malfet-patch-14 -> origin/malfet-patch-14 2025-08-14T21:22:30.5253436Z * [new branch] malfet-patch-2 -> origin/malfet-patch-2 2025-08-14T21:22:30.5253823Z * [new branch] malfet-patch-3 -> origin/malfet-patch-3 2025-08-14T21:22:30.5253977Z * [new branch] malfet-patch-4 -> origin/malfet-patch-4 2025-08-14T21:22:30.5254125Z * [new branch] malfet-patch-5 -> origin/malfet-patch-5 2025-08-14T21:22:30.5254285Z * [new branch] malfet-patch-6 -> origin/malfet-patch-6 2025-08-14T21:22:30.5254443Z * [new branch] malfet-patch-7 -> origin/malfet-patch-7 2025-08-14T21:22:30.5254744Z * [new branch] malfet-patch-8 -> origin/malfet-patch-8 2025-08-14T21:22:30.5255755Z * [new branch] malfet-patch-9 -> origin/malfet-patch-9 2025-08-14T21:22:30.5257100Z * [new branch] malfet/delete-upsteam-cuda -> origin/malfet/delete-upsteam-cuda 2025-08-14T21:22:30.5258050Z * [new branch] malfet/mps-implement-col2im -> origin/malfet/mps-implement-col2im 2025-08-14T21:22:30.5259310Z * [new branch] manuel/fix_multidim_boolean_indexing -> origin/manuel/fix_multidim_boolean_indexing 2025-08-14T21:22:30.5260093Z * [new branch] manuel/np_empty_ellipsis -> origin/manuel/np_empty_ellipsis 2025-08-14T21:22:30.5261049Z * [new branch] manuel/test-ops-common-allow-mps -> origin/manuel/test-ops-common-allow-mps 2025-08-14T21:22:30.5261979Z * [new branch] metascroy-patch-1 -> origin/metascroy-patch-1 2025-08-14T21:22:30.5263273Z * [new branch] mlazos/S429861-debug -> origin/mlazos/S429861-debug 2025-08-14T21:22:30.5272210Z * [new branch] mlazos/aa -> origin/mlazos/aa 2025-08-14T21:22:30.5272458Z * [new branch] mlazos/arg-renames -> origin/mlazos/arg-renames 2025-08-14T21:22:30.5272826Z * [new branch] mlazos/backup-test-branch -> origin/mlazos/backup-test-branch 2025-08-14T21:22:30.5273071Z * [new branch] mlazos/bad-cudagraphs -> origin/mlazos/bad-cudagraphs 2025-08-14T21:22:30.5273274Z * [new branch] mlazos/baseline -> origin/mlazos/baseline 2025-08-14T21:22:30.5273548Z * [new branch] mlazos/baseline-graph-breaks -> origin/mlazos/baseline-graph-breaks 2025-08-14T21:22:30.5273768Z * [new branch] mlazos/beta-tensor -> origin/mlazos/beta-tensor 2025-08-14T21:22:30.5273970Z * [new branch] mlazos/buffers -> origin/mlazos/buffers 2025-08-14T21:22:30.5274179Z * [new branch] mlazos/buffers2 -> origin/mlazos/buffers2 2025-08-14T21:22:30.5274334Z * [new branch] mlazos/buffers3 -> origin/mlazos/buffers3 2025-08-14T21:22:30.5274484Z * [new branch] mlazos/ck2 -> origin/mlazos/ck2 2025-08-14T21:22:30.5275418Z * [new branch] mlazos/combokernels -> origin/mlazos/combokernels 2025-08-14T21:22:30.5276314Z * [new branch] mlazos/ctx-cleanup -> origin/mlazos/ctx-cleanup 2025-08-14T21:22:30.5277673Z * [new branch] mlazos/cudagraph-tests -> origin/mlazos/cudagraph-tests 2025-08-14T21:22:30.5282615Z * [new branch] mlazos/cudagraphs-measurement -> origin/mlazos/cudagraphs-measurement 2025-08-14T21:22:30.5288327Z * [new branch] mlazos/cutlass-test -> origin/mlazos/cutlass-test 2025-08-14T21:22:30.5289266Z * [new branch] mlazos/cutlass-topo-bug -> origin/mlazos/cutlass-topo-bug 2025-08-14T21:22:30.5290233Z * [new branch] mlazos/data-gather -> origin/mlazos/data-gather 2025-08-14T21:22:30.5291169Z * [new branch] mlazos/data-ptrs2 -> origin/mlazos/data-ptrs2 
2025-08-14T21:22:30.5292051Z * [new branch] mlazos/data-ptrs3 -> origin/mlazos/data-ptrs3 2025-08-14T21:22:30.5297202Z * [new branch] mlazos/dataclass-proxy -> origin/mlazos/dataclass-proxy 2025-08-14T21:22:30.5297463Z * [new branch] mlazos/dc-attrs -> origin/mlazos/dc-attrs 2025-08-14T21:22:30.5297642Z * [new branch] mlazos/dc-helion -> origin/mlazos/dc-helion 2025-08-14T21:22:30.5297856Z * [new branch] mlazos/dict-fix -> origin/mlazos/dict-fix 2025-08-14T21:22:30.5301721Z * [new branch] mlazos/disable-closures -> origin/mlazos/disable-closures 2025-08-14T21:22:30.5301937Z * [new branch] mlazos/disable-tf -> origin/mlazos/disable-tf 2025-08-14T21:22:30.5302130Z * [new branch] mlazos/dupe-fix -> origin/mlazos/dupe-fix 2025-08-14T21:22:30.5302292Z * [new branch] mlazos/dyn-batch -> origin/mlazos/dyn-batch 2025-08-14T21:22:30.5302645Z * [new branch] mlazos/evt -> origin/mlazos/evt 2025-08-14T21:22:30.5303694Z * [new branch] mlazos/exp_disable -> origin/mlazos/exp_disable 2025-08-14T21:22:30.5304672Z * [new branch] mlazos/extract-examples -> origin/mlazos/extract-examples 2025-08-14T21:22:30.5305563Z * [new branch] mlazos/foreach-op -> origin/mlazos/foreach-op 2025-08-14T21:22:30.5306486Z * [new branch] mlazos/fp8 -> origin/mlazos/fp8 2025-08-14T21:22:30.5313760Z * [new branch] mlazos/fp8-bias -> origin/mlazos/fp8-bias 2025-08-14T21:22:30.5313997Z * [new branch] mlazos/fp8-bias-fusion -> origin/mlazos/fp8-bias-fusion 2025-08-14T21:22:30.5314194Z * [new branch] mlazos/freezing -> origin/mlazos/freezing 2025-08-14T21:22:30.5314398Z * [new branch] mlazos/h-comp -> origin/mlazos/h-comp 2025-08-14T21:22:30.5314590Z * [new branch] mlazos/h-comp2 -> origin/mlazos/h-comp2 2025-08-14T21:22:30.5314875Z * [new branch] mlazos/hash-hop -> origin/mlazos/hash-hop 2025-08-14T21:22:30.5315059Z * [new branch] mlazos/hc -> origin/mlazos/hc 2025-08-14T21:22:30.5315296Z * [new branch] mlazos/hc-cycles -> origin/mlazos/hc-cycles 2025-08-14T21:22:30.5315512Z * [new branch] mlazos/hc-fixes -> origin/mlazos/hc-fixes 2025-08-14T21:22:30.5316844Z * [new branch] mlazos/hc-fixes3 -> origin/mlazos/hc-fixes3 2025-08-14T21:22:30.5317806Z * [new branch] mlazos/hc-fixes4 -> origin/mlazos/hc-fixes4 2025-08-14T21:22:30.5318739Z * [new branch] mlazos/hc-hf -> origin/mlazos/hc-hf 2025-08-14T21:22:30.5319590Z * [new branch] mlazos/hc-mut -> origin/mlazos/hc-mut 2025-08-14T21:22:30.5320487Z * [new branch] mlazos/hc10 -> origin/mlazos/hc10 2025-08-14T21:22:30.5321553Z * [new branch] mlazos/hc11 -> origin/mlazos/hc11 2025-08-14T21:22:30.5326874Z * [new branch] mlazos/hc12 -> origin/mlazos/hc12 2025-08-14T21:22:30.5327792Z * [new branch] mlazos/hc13 -> origin/mlazos/hc13 2025-08-14T21:22:30.5328807Z * [new branch] mlazos/hc14 -> origin/mlazos/hc14 2025-08-14T21:22:30.5329742Z * [new branch] mlazos/hc15 -> origin/mlazos/hc15 2025-08-14T21:22:30.5330711Z * [new branch] mlazos/hc2 -> origin/mlazos/hc2 2025-08-14T21:22:30.5331584Z * [new branch] mlazos/hc4 -> origin/mlazos/hc4 2025-08-14T21:22:30.5332509Z * [new branch] mlazos/hc5 -> origin/mlazos/hc5 2025-08-14T21:22:30.5333419Z * [new branch] mlazos/hc6 -> origin/mlazos/hc6 2025-08-14T21:22:30.5334466Z * [new branch] mlazos/hc7 -> origin/mlazos/hc7 2025-08-14T21:22:30.5335301Z * [new branch] mlazos/hc8 -> origin/mlazos/hc8 2025-08-14T21:22:30.5336234Z * [new branch] mlazos/hc9 -> origin/mlazos/hc9 2025-08-14T21:22:30.5340811Z * [new branch] mlazos/hc_baseline2 -> origin/mlazos/hc_baseline2 2025-08-14T21:22:30.5340985Z * [new branch] mlazos/hop-modes -> origin/mlazos/hop-modes 
2025-08-14T21:22:30.5341173Z * [new branch] mlazos/init-per-param -> origin/mlazos/init-per-param 2025-08-14T21:22:30.5341359Z * [new branch] mlazos/init_per_param -> origin/mlazos/init_per_param 2025-08-14T21:22:30.5341533Z * [new branch] mlazos/less-guards -> origin/mlazos/less-guards 2025-08-14T21:22:30.5342187Z * [new branch] mlazos/lr-composibility -> origin/mlazos/lr-composibility 2025-08-14T21:22:30.5344862Z * [new branch] mlazos/main -> origin/mlazos/main 2025-08-14T21:22:30.5345149Z * [new branch] mlazos/main-test-enablement -> origin/mlazos/main-test-enablement 2025-08-14T21:22:30.5345339Z * [new branch] mlazos/main2 -> origin/mlazos/main2 2025-08-14T21:22:30.5345756Z * [new branch] mlazos/mcg -> origin/mlazos/mcg 2025-08-14T21:22:30.5346723Z * [new branch] mlazos/mcg2 -> origin/mlazos/mcg2 2025-08-14T21:22:30.5347671Z * [new branch] mlazos/meta-guards -> origin/mlazos/meta-guards 2025-08-14T21:22:30.5349380Z * [new branch] mlazos/mlazos/ck2 -> origin/mlazos/mlazos/ck2 2025-08-14T21:22:30.5350468Z * [new branch] mlazos/mlazos/foreach-map-adam -> origin/mlazos/mlazos/foreach-map-adam 2025-08-14T21:22:30.5359362Z * [new branch] mlazos/mlazos/tf-mode-backup -> origin/mlazos/mlazos/tf-mode-backup 2025-08-14T21:22:30.5359565Z * [new branch] mlazos/mod-fix -> origin/mlazos/mod-fix 2025-08-14T21:22:30.5359898Z * [new branch] mlazos/mode-fix -> origin/mlazos/mode-fix 2025-08-14T21:22:30.5360113Z * [new branch] mlazos/more-tests -> origin/mlazos/more-tests 2025-08-14T21:22:30.5360327Z * [new branch] mlazos/nested-dc -> origin/mlazos/nested-dc 2025-08-14T21:22:30.5360545Z * [new branch] mlazos/no-cpp -> origin/mlazos/no-cpp 2025-08-14T21:22:30.5361766Z * [new branch] mlazos/no-init-group-handling -> origin/mlazos/no-init-group-handling 2025-08-14T21:22:30.5362483Z * [new branch] mlazos/offsets -> origin/mlazos/offsets 2025-08-14T21:22:30.5363426Z * [new branch] mlazos/opt-bench-exp2 -> origin/mlazos/opt-bench-exp2 2025-08-14T21:22:30.5364329Z * [new branch] mlazos/opt-incr -> origin/mlazos/opt-incr 2025-08-14T21:22:30.5365326Z * [new branch] mlazos/proxy-ctors -> origin/mlazos/proxy-ctors 2025-08-14T21:22:30.5369905Z * [new branch] mlazos/proxy-opt -> origin/mlazos/proxy-opt 2025-08-14T21:22:30.5370114Z * [new branch] mlazos/quant-fix -> origin/mlazos/quant-fix 2025-08-14T21:22:30.5370346Z * [new branch] mlazos/rm-buf-names -> origin/mlazos/rm-buf-names 2025-08-14T21:22:30.5370507Z * [new branch] mlazos/rm-spam -> origin/mlazos/rm-spam 2025-08-14T21:22:30.5370662Z * [new branch] mlazos/rtp -> origin/mlazos/rtp 2025-08-14T21:22:30.5371355Z * [new branch] mlazos/static-idx-dbg -> origin/mlazos/static-idx-dbg 2025-08-14T21:22:30.5372440Z * [new branch] mlazos/static-inputs-log -> origin/mlazos/static-inputs-log 2025-08-14T21:22:30.5373371Z * [new branch] mlazos/sub-param-fix -> origin/mlazos/sub-param-fix 2025-08-14T21:22:30.5374237Z * [new branch] mlazos/td-fix2 -> origin/mlazos/td-fix2 2025-08-14T21:22:30.5375189Z * [new branch] mlazos/tensor-hasattr2 -> origin/mlazos/tensor-hasattr2 2025-08-14T21:22:30.5376048Z * [new branch] mlazos/test -> origin/mlazos/test 2025-08-14T21:22:30.5376968Z * [new branch] mlazos/tf-mode -> origin/mlazos/tf-mode 2025-08-14T21:22:30.5377931Z * [new branch] mlazos/tf-mode-backup2 -> origin/mlazos/tf-mode-backup2 2025-08-14T21:22:30.5378895Z * [new branch] mlazos/tf-mode-reland -> origin/mlazos/tf-mode-reland 2025-08-14T21:22:30.5380089Z * [new branch] mlazos/tf-mode-reland2 -> origin/mlazos/tf-mode-reland2 2025-08-14T21:22:30.5385370Z * [new branch] 
mlazos/tf-mode-reland3 -> origin/mlazos/tf-mode-reland3 2025-08-14T21:22:30.5386200Z * [new branch] mlazos/topo-fix -> origin/mlazos/topo-fix 2025-08-14T21:22:30.5387174Z * [new branch] mlazos/triton-no-epi -> origin/mlazos/triton-no-epi 2025-08-14T21:22:30.5388453Z * [new branch] mlazos/tune-proto -> origin/mlazos/tune-proto 2025-08-14T21:22:30.5389428Z * [new branch] mlazos/tuple-fixes -> origin/mlazos/tuple-fixes 2025-08-14T21:22:30.5390349Z * [new branch] mlazos/tuple-fixes2 -> origin/mlazos/tuple-fixes2 2025-08-14T21:22:30.5391331Z * [new branch] mlazos/tuple-handling -> origin/mlazos/tuple-handling 2025-08-14T21:22:30.5392302Z * [new branch] mlazos/user-streams -> origin/mlazos/user-streams 2025-08-14T21:22:30.5393219Z * [new branch] mlazos/vary-beta -> origin/mlazos/vary-beta 2025-08-14T21:22:30.5394277Z * [new branch] mlazos/vary-beta2 -> origin/mlazos/vary-beta2 2025-08-14T21:22:30.5403065Z * [new branch] mlazos/weird-perf1 -> origin/mlazos/weird-perf1 2025-08-14T21:22:30.5403418Z * [new branch] mm_out_dtype_compile -> origin/mm_out_dtype_compile 2025-08-14T21:22:30.5403614Z * [new branch] modify-setupvllm -> origin/modify-setupvllm 2025-08-14T21:22:30.5403797Z * [new branch] move-theme-out-docker -> origin/move-theme-out-docker 2025-08-14T21:22:30.5403961Z * [new branch] mps-linear-1d -> origin/mps-linear-1d 2025-08-14T21:22:30.5404118Z * [new branch] msaroufim/be1 -> origin/msaroufim/be1 2025-08-14T21:22:30.5404295Z * [new branch] msaroufim/cn_path -> origin/msaroufim/cn_path 2025-08-14T21:22:30.5404506Z * [new branch] msaroufim/dtensorfusedadam -> origin/msaroufim/dtensorfusedadam 2025-08-14T21:22:30.5404668Z * [new branch] msaroufim/reduce -> origin/msaroufim/reduce 2025-08-14T21:22:30.5405582Z * [new branch] mtia/basic-cmake -> origin/mtia/basic-cmake 2025-08-14T21:22:30.5406603Z * [new branch] muon_dev -> origin/muon_dev 2025-08-14T21:22:30.5407644Z * [new branch] new-modifiy-setupvllm -> origin/new-modifiy-setupvllm 2025-08-14T21:22:30.5408601Z * [new branch] new-setupvllm -> origin/new-setupvllm 2025-08-14T21:22:30.5414071Z * [new branch] newtest-base -> origin/newtest-base 2025-08-14T21:22:30.5415276Z * [new branch] ngimel/cat_perf -> origin/ngimel/cat_perf 2025-08-14T21:22:30.5416234Z * [new branch] ngimel/cudamoduleload -> origin/ngimel/cudamoduleload 2025-08-14T21:22:30.5417116Z * [new branch] ngimel/fabric_driver_version -> origin/ngimel/fabric_driver_version 2025-08-14T21:22:30.5417929Z * [new branch] ngimel/fabric_symm -> origin/ngimel/fabric_symm 2025-08-14T21:22:30.5418797Z * [new branch] ngimel/gg_new -> origin/ngimel/gg_new 2025-08-14T21:22:30.5419674Z * [new branch] ngimel/grouped_mm_checks -> origin/ngimel/grouped_mm_checks 2025-08-14T21:22:30.5420454Z * [new branch] ngimel/guardfabric -> origin/ngimel/guardfabric 2025-08-14T21:22:30.5421338Z * [new branch] ngimel/index_None -> origin/ngimel/index_None 2025-08-14T21:22:30.5422445Z * [new branch] ngimel/modeguard -> origin/ngimel/modeguard 2025-08-14T21:22:30.5431851Z * [new branch] ngimel/multicast_fix -> origin/ngimel/multicast_fix 2025-08-14T21:22:30.5432078Z * [new branch] ngimel/unbind_multimem -> origin/ngimel/unbind_multimem 2025-08-14T21:22:30.5432272Z * [new branch] nightly -> origin/nightly 2025-08-14T21:22:30.5432494Z * [new branch] nmacchioni-patch-10 -> origin/nmacchioni-patch-10 2025-08-14T21:22:30.5432715Z * [new branch] nmacchioni-patch-7 -> origin/nmacchioni-patch-7 2025-08-14T21:22:30.5432946Z * [new branch] nmacchioni-patch-8 -> origin/nmacchioni-patch-8 2025-08-14T21:22:30.5433161Z * [new 
branch] nmacchioni-patch-9 -> origin/nmacchioni-patch-9 2025-08-14T21:22:30.5433346Z * [new branch] nullplay_fuse_matmul -> origin/nullplay_fuse_matmul 2025-08-14T21:22:30.5433641Z * [new branch] nweidia/enable-B200-inductor-nightly-ci -> origin/nweidia/enable-B200-inductor-nightly-ci 2025-08-14T21:22:30.5433782Z * [new branch] one-off -> origin/one-off 2025-08-14T21:22:30.5435403Z * [new branch] orig/release/1.10 -> origin/orig/release/1.10 2025-08-14T21:22:30.5436384Z * [new branch] orig/release/1.11 -> origin/orig/release/1.11 2025-08-14T21:22:30.5437407Z * [new branch] orig/release/1.12 -> origin/orig/release/1.12 2025-08-14T21:22:30.5447263Z * [new branch] orig/release/1.13 -> origin/orig/release/1.13 2025-08-14T21:22:30.5448408Z * [new branch] orig/release/1.6 -> origin/orig/release/1.6 2025-08-14T21:22:30.5449949Z * [new branch] orig/release/1.7 -> origin/orig/release/1.7 2025-08-14T21:22:30.5450934Z * [new branch] orig/release/1.8 -> origin/orig/release/1.8 2025-08-14T21:22:30.5451931Z * [new branch] orig/release/1.9 -> origin/orig/release/1.9 2025-08-14T21:22:30.5456636Z * [new branch] orig/release/2.0 -> origin/orig/release/2.0 2025-08-14T21:22:30.5456802Z * [new branch] orig/release/2.1 -> origin/orig/release/2.1 2025-08-14T21:22:30.5457217Z * [new branch] orig/release/2.2 -> origin/orig/release/2.2 2025-08-14T21:22:30.5461020Z * [new branch] orig/release/2.3 -> origin/orig/release/2.3 2025-08-14T21:22:30.5461231Z * [new branch] orig/release/2.4 -> origin/orig/release/2.4 2025-08-14T21:22:30.5461447Z * [new branch] orig/release/2.5 -> origin/orig/release/2.5 2025-08-14T21:22:30.5461607Z * [new branch] orig/release/2.6 -> origin/orig/release/2.6 2025-08-14T21:22:30.5461950Z * [new branch] orig/release/2.7 -> origin/orig/release/2.7 2025-08-14T21:22:30.5463318Z * [new branch] orig/release/2.8 -> origin/orig/release/2.8 2025-08-14T21:22:30.5464652Z * [new branch] oulgen/fx_graph -> origin/oulgen/fx_graph 2025-08-14T21:22:30.5465989Z * [new branch] padded-tensor -> origin/padded-tensor 2025-08-14T21:22:30.5471058Z * [new branch] parallel_cat -> origin/parallel_cat 2025-08-14T21:22:30.5471253Z * [new branch] pca2 -> origin/pca2 2025-08-14T21:22:30.5471426Z * [new branch] pianpwk-patch-1 -> origin/pianpwk-patch-1 2025-08-14T21:22:30.5471685Z * [new branch] pianpwk/backed_size_oblivious_export -> origin/pianpwk/backed_size_oblivious_export 2025-08-14T21:22:30.5471996Z * [new branch] pianpwk/dde_repeat_cat -> origin/pianpwk/dde_repeat_cat 2025-08-14T21:22:30.5475784Z * [new branch] pianpwk/draft_export_normalize -> origin/pianpwk/draft_export_normalize 2025-08-14T21:22:30.5476060Z * [new branch] pianpwk/dynamic_source_dim -> origin/pianpwk/dynamic_source_dim 2025-08-14T21:22:30.5476281Z * [new branch] pianpwk/invalidate_fake_memo -> origin/pianpwk/invalidate_fake_memo 2025-08-14T21:22:30.5476485Z * [new branch] pianpwk/lru_cache_bound_sympy -> origin/pianpwk/lru_cache_bound_sympy 2025-08-14T21:22:30.5477392Z * [new branch] pianpwk/max_1_strides -> origin/pianpwk/max_1_strides 2025-08-14T21:22:30.5478244Z * [new branch] pianpwk/nonzero_memo -> origin/pianpwk/nonzero_memo 2025-08-14T21:22:30.5479242Z * [new branch] pianpwk/oblivious_reshape_view_better -> origin/pianpwk/oblivious_reshape_view_better 2025-08-14T21:22:30.5480109Z * [new branch] pianpwk/oblivious_should_swap -> origin/pianpwk/oblivious_should_swap 2025-08-14T21:22:30.5481118Z * [new branch] pianpwk/oblivious_slice_forward -> origin/pianpwk/oblivious_slice_forward 2025-08-14T21:22:30.5486547Z * [new branch] 
pianpwk/oblivious_where -> origin/pianpwk/oblivious_where 2025-08-14T21:22:30.5487507Z * [new branch] pianpwk/param_static_pgo -> origin/pianpwk/param_static_pgo 2025-08-14T21:22:30.5488393Z * [new branch] pianpwk/pre_forward_hook -> origin/pianpwk/pre_forward_hook 2025-08-14T21:22:30.5489432Z * [new branch] pianpwk/remove_guard_fail_break -> origin/pianpwk/remove_guard_fail_break 2025-08-14T21:22:30.5490300Z * [new branch] pianpwk/slice_fresh_symbols -> origin/pianpwk/slice_fresh_symbols 2025-08-14T21:22:30.5491285Z * [new branch] pianpwk/sym_sym -> origin/pianpwk/sym_sym 2025-08-14T21:22:30.5492071Z * [new branch] pianpwk/test_slice_fake_impl -> origin/pianpwk/test_slice_fake_impl 2025-08-14T21:22:30.5493027Z * [new branch] pianpwk/unbacked_channels_last -> origin/pianpwk/unbacked_channels_last 2025-08-14T21:22:30.5494026Z * [new branch] pianpwk/unbacked_safe_conv1d -> origin/pianpwk/unbacked_safe_conv1d 2025-08-14T21:22:30.5494928Z * [new branch] pianpwk/unbacked_sdpa_flash -> origin/pianpwk/unbacked_sdpa_flash 2025-08-14T21:22:30.5500134Z * [new branch] pianpwk/unbacked_should_swap -> origin/pianpwk/unbacked_should_swap 2025-08-14T21:22:30.5500357Z * [new branch] pianpwk/unbacked_should_swap_2 -> origin/pianpwk/unbacked_should_swap_2 2025-08-14T21:22:30.5500573Z * [new branch] pianpwk/unbacked_slice_binding -> origin/pianpwk/unbacked_slice_binding 2025-08-14T21:22:30.5500791Z * [new branch] pianpwk/unbacked_slice_forward -> origin/pianpwk/unbacked_slice_forward 2025-08-14T21:22:30.5501000Z * [new branch] pianpwk/verbose_tensor_guards -> origin/pianpwk/verbose_tensor_guards 2025-08-14T21:22:30.5501187Z * [new branch] pianpwk/wan21_reshape -> origin/pianpwk/wan21_reshape 2025-08-14T21:22:30.5504533Z * [new branch] pianpwk/whitelist_optimizer -> origin/pianpwk/whitelist_optimizer 2025-08-14T21:22:30.5504822Z * [new branch] pin-torchao -> origin/pin-torchao 2025-08-14T21:22:30.5505170Z * [new branch] piz/fall_back_missing_0705 -> origin/piz/fall_back_missing_0705 2025-08-14T21:22:30.5505526Z * [new branch] piz/fall_back_missing_0716 -> origin/piz/fall_back_missing_0716 2025-08-14T21:22:30.5506272Z * [new branch] piz/fill_dist_cost_0702-3 -> origin/piz/fill_dist_cost_0702-3 2025-08-14T21:22:30.5507143Z * [new branch] piz/fill_dist_cost_0702-4 -> origin/piz/fill_dist_cost_0702-4 2025-08-14T21:22:30.5507981Z * [new branch] piz/fill_dist_cost_0702-5 -> origin/piz/fill_dist_cost_0702-5 2025-08-14T21:22:30.5508819Z * [new branch] piz/fix_sort_ -> origin/piz/fix_sort_ 2025-08-14T21:22:30.5510057Z * [new branch] piz/improve_scatter_0808 -> origin/piz/improve_scatter_0808 2025-08-14T21:22:30.5518875Z * [new branch] pool-separate -> origin/pool-separate 2025-08-14T21:22:30.5519168Z * [new branch] pr-156087 -> origin/pr-156087 2025-08-14T21:22:30.5519451Z * [new branch] pr/131860 -> origin/pr/131860 2025-08-14T21:22:30.5519763Z * [new branch] predispatch_to -> origin/predispatch_to 2025-08-14T21:22:30.5520081Z * [new branch] pt-opt-cuda3 -> origin/pt-opt-cuda3 2025-08-14T21:22:30.5520978Z * [new branch] pt2e-cache-model-device -> origin/pt2e-cache-model-device 2025-08-14T21:22:30.5522054Z * [new branch] pull-latest-theme -> origin/pull-latest-theme 2025-08-14T21:22:30.5522987Z * [new branch] pyobjectslot -> origin/pyobjectslot 2025-08-14T21:22:30.5524545Z * [new branch] python_compiled_autograd -> origin/python_compiled_autograd 2025-08-14T21:22:30.5529256Z * [new branch] qchip/export-D54134695 -> origin/qchip/export-D54134695 2025-08-14T21:22:30.5529525Z * [new branch] quint-bits -> 
origin/quint-bits 2025-08-14T21:22:30.5529799Z * [new branch] release/1.10 -> origin/release/1.10 2025-08-14T21:22:30.5530073Z * [new branch] release/1.11 -> origin/release/1.11 2025-08-14T21:22:30.5530982Z * [new branch] release/1.12 -> origin/release/1.12 2025-08-14T21:22:30.5531923Z * [new branch] release/1.13 -> origin/release/1.13 2025-08-14T21:22:30.5532898Z * [new branch] release/1.4 -> origin/release/1.4 2025-08-14T21:22:30.5533614Z * [new branch] release/1.4.1 -> origin/release/1.4.1 2025-08-14T21:22:30.5534545Z * [new branch] release/1.5 -> origin/release/1.5 2025-08-14T21:22:30.5535579Z * [new branch] release/1.6 -> origin/release/1.6 2025-08-14T21:22:30.5536665Z * [new branch] release/1.7 -> origin/release/1.7 2025-08-14T21:22:30.5537807Z * [new branch] release/1.8 -> origin/release/1.8 2025-08-14T21:22:30.5539030Z * [new branch] release/1.9 -> origin/release/1.9 2025-08-14T21:22:30.5544453Z * [new branch] release/2.0 -> origin/release/2.0 2025-08-14T21:22:30.5545462Z * [new branch] release/2.1 -> origin/release/2.1 2025-08-14T21:22:30.5546435Z * [new branch] release/2.2 -> origin/release/2.2 2025-08-14T21:22:30.5547682Z * [new branch] release/2.3 -> origin/release/2.3 2025-08-14T21:22:30.5549288Z * [new branch] release/2.4 -> origin/release/2.4 2025-08-14T21:22:30.5550723Z * [new branch] release/2.5 -> origin/release/2.5 2025-08-14T21:22:30.5551701Z * [new branch] release/2.6 -> origin/release/2.6 2025-08-14T21:22:30.5552749Z * [new branch] release/2.7 -> origin/release/2.7 2025-08-14T21:22:30.5553844Z * [new branch] release/2.8 -> origin/release/2.8 2025-08-14T21:22:30.5558237Z * [new branch] release_notes -> origin/release_notes 2025-08-14T21:22:30.5558593Z * [new branch] remove-actionable-label -> origin/remove-actionable-label 2025-08-14T21:22:30.5558872Z * [new branch] remove-ao -> origin/remove-ao 2025-08-14T21:22:30.5559301Z * [new branch] replace-pytorch-labs-20250812-195836 -> origin/replace-pytorch-labs-20250812-195836 2025-08-14T21:22:30.5559963Z * [new branch] replace-pytorch-labs-20250812-200248 -> origin/replace-pytorch-labs-20250812-200248 2025-08-14T21:22:30.5560712Z * [new branch] replace-pytorch-labs-20250812-200324 -> origin/replace-pytorch-labs-20250812-200324 2025-08-14T21:22:30.5561781Z * [new branch] replace-pytorch-labs-20250812-204020 -> origin/replace-pytorch-labs-20250812-204020 2025-08-14T21:22:30.5562790Z * [new branch] replace-pytorch-labs-20250812-204125 -> origin/replace-pytorch-labs-20250812-204125 2025-08-14T21:22:30.5563762Z * [new branch] replace-pytorch-labs-20250812-205624 -> origin/replace-pytorch-labs-20250812-205624 2025-08-14T21:22:30.5566169Z * [new branch] revert-131069-gh/krzysztofjordan/1/head -> origin/revert-131069-gh/krzysztofjordan/1/head 2025-08-14T21:22:30.5568010Z * [new branch] revert-131469-gh/andrewor14/51/head -> origin/revert-131469-gh/andrewor14/51/head 2025-08-14T21:22:30.5578399Z * [new branch] revert-156870-gh/skarjala/3/head -> origin/revert-156870-gh/skarjala/3/head 2025-08-14T21:22:30.5579775Z * [new branch] revert-157914-cherry-pick-157503-by-pytorch_bot_bot_ -> origin/revert-157914-cherry-pick-157503-by-pytorch_bot_bot_ 2025-08-14T21:22:30.5580489Z * [new branch] revert-direct-updates -> origin/revert-direct-updates 2025-08-14T21:22:30.5581371Z * [new branch] rocm-monitoring -> origin/rocm-monitoring 2025-08-14T21:22:30.5583163Z * [new branch] ryanguo99/cleanup-dynamo-expected-failures -> origin/ryanguo99/cleanup-dynamo-expected-failures 2025-08-14T21:22:30.5591496Z * [new branch] ryanguo99/fix-closure-var -> 
origin/ryanguo99/fix-closure-var 2025-08-14T21:22:30.5591860Z * [new branch] rzou/faketensor_bench -> origin/rzou/faketensor_bench 2025-08-14T21:22:30.5592248Z * [new branch] rzou/njt -> origin/rzou/njt 2025-08-14T21:22:30.5592563Z * [new branch] rzou/operator -> origin/rzou/operator 2025-08-14T21:22:30.5592862Z * [new branch] rzou/pca -> origin/rzou/pca 2025-08-14T21:22:30.5593164Z * [new branch] rzou/pipe_split -> origin/rzou/pipe_split 2025-08-14T21:22:30.5593460Z * [new branch] rzou/realprop -> origin/rzou/realprop 2025-08-14T21:22:30.5593804Z * [new branch] rzou/setup_context -> origin/rzou/setup_context 2025-08-14T21:22:30.5594219Z * [new branch] sanchitintel/refactor_aten_int8_woq_gemm -> origin/sanchitintel/refactor_aten_int8_woq_gemm 2025-08-14T21:22:30.5594921Z * [new branch] sanchitintel/weird_thing_with_test_cpu_select_algorithm -> origin/sanchitintel/weird_thing_with_test_cpu_select_algorithm 2025-08-14T21:22:30.5595211Z * [new branch] sapling-pr-archive-SS-JIA -> origin/sapling-pr-archive-SS-JIA 2025-08-14T21:22:30.5595468Z * [new branch] save -> origin/save 2025-08-14T21:22:30.5596920Z * [new branch] sdym/2.5.1 -> origin/sdym/2.5.1 2025-08-14T21:22:30.5601403Z * [new branch] seemethere-patch-1 -> origin/seemethere-patch-1 2025-08-14T21:22:30.5601582Z * [new branch] setup-torchci -> origin/setup-torchci 2025-08-14T21:22:30.5601843Z * [new branch] setupvllm -> origin/setupvllm 2025-08-14T21:22:30.5602023Z * [new branch] share_and_pin_fork -> origin/share_and_pin_fork 2025-08-14T21:22:30.5602251Z * [new branch] shengf/fx-xform-perf -> origin/shengf/fx-xform-perf 2025-08-14T21:22:30.5606337Z * [new branch] shikaili_fp8_allgather -> origin/shikaili_fp8_allgather 2025-08-14T21:22:30.5606703Z * [new branch] shoumikhin-patch-12 -> origin/shoumikhin-patch-12 2025-08-14T21:22:30.5607076Z * [new branch] simplify-fq-per-channel -> origin/simplify-fq-per-channel 2025-08-14T21:22:30.5607245Z * [new branch] solve-accuracy-fix -> origin/solve-accuracy-fix 2025-08-14T21:22:30.5607833Z * [new branch] sqzhang/flight4 -> origin/sqzhang/flight4 2025-08-14T21:22:30.5608809Z * [new branch] sqzhang/flight4plus -> origin/sqzhang/flight4plus 2025-08-14T21:22:30.5610158Z * [new branch] sraikund/record_funct_test -> origin/sraikund/record_funct_test 2025-08-14T21:22:30.5611355Z * [new branch] sraikund16/test -> origin/sraikund16/test 2025-08-14T21:22:30.5616404Z * [new branch] stablize-compilation-time -> origin/stablize-compilation-time 2025-08-14T21:22:30.5619982Z * [new branch] standalone-templates -> origin/standalone-templates 2025-08-14T21:22:30.5620986Z * [new branch] standalone_package_weights -> origin/standalone_package_weights 2025-08-14T21:22:30.5621891Z * [new branch] starterTaskUpdate -> origin/starterTaskUpdate 2025-08-14T21:22:30.5622821Z * [new branch] step2vllmsetup -> origin/step2vllmsetup 2025-08-14T21:22:30.5623750Z * [new branch] subgraph_fuse -> origin/subgraph_fuse 2025-08-14T21:22:30.5624959Z * [new branch] support-uv-in-collect_env -> origin/support-uv-in-collect_env 2025-08-14T21:22:30.5626230Z * [new branch] suryasub/fix-nccl-hang -> origin/suryasub/fix-nccl-hang 2025-08-14T21:22:30.5635208Z * [new branch] sve-poc -> origin/sve-poc 2025-08-14T21:22:30.5635564Z * [new branch] svekars-patch-1 -> origin/svekars-patch-1 2025-08-14T21:22:30.5635887Z * [new branch] svekars-patch-2 -> origin/svekars-patch-2 2025-08-14T21:22:30.5636284Z * [new branch] switch-bn -> origin/switch-bn 2025-08-14T21:22:30.5636573Z * [new branch] sympy-bottleneck-repro -> origin/sympy-bottleneck-repro 
2025-08-14T21:22:30.5637000Z * [new branch] tenpercent/ck_inductor_gfx950 -> origin/tenpercent/ck_inductor_gfx950 2025-08-14T21:22:30.5637374Z * [new branch] tensordict_integration -> origin/tensordict_integration 2025-08-14T21:22:30.5637720Z * [new branch] test-half-migration-internally -> origin/test-half-migration-internally 2025-08-14T21:22:30.5638040Z * [new branch] test-internal-et -> origin/test-internal-et 2025-08-14T21:22:30.5638377Z * [new branch] test-move-conda-builds -> origin/test-move-conda-builds 2025-08-14T21:22:30.5638682Z * [new branch] test-myst-markdown-docstring -> origin/test-myst-markdown-docstring 2025-08-14T21:22:30.5638954Z * [new branch] test-old -> origin/test-old 2025-08-14T21:22:30.5639893Z * [new branch] test-vec-migration-internally -> origin/test-vec-migration-internally 2025-08-14T21:22:30.5649546Z * [new branch] test/bmm_heur -> origin/test/bmm_heur 2025-08-14T21:22:30.5649839Z * [new branch] test/inductor -> origin/test/inductor 2025-08-14T21:22:30.5650201Z * [new branch] tidy_performance_cyy -> origin/tidy_performance_cyy 2025-08-14T21:22:30.5650491Z * [new branch] torchtitan_ep -> origin/torchtitan_ep 2025-08-14T21:22:30.5650831Z * [new branch] trace_fsdp_torchtune_lora -> origin/trace_fsdp_torchtune_lora 2025-08-14T21:22:30.5651201Z * [new branch] traceable_fsdp_unit_tests -> origin/traceable_fsdp_unit_tests 2025-08-14T21:22:30.5651771Z * [new branch] trackMonitor -> origin/trackMonitor 2025-08-14T21:22:30.5652930Z * [new branch] tree_loop_vec_base -> origin/tree_loop_vec_base 2025-08-14T21:22:30.5653875Z * [new branch] tree_vec_base -> origin/tree_vec_base 2025-08-14T21:22:30.5654902Z * [new branch] triton-update -> origin/triton-update 2025-08-14T21:22:30.5659649Z * [new branch] triton_kernel -> origin/triton_kernel 2025-08-14T21:22:30.5659963Z * [new branch] triton_kernel_perf -> origin/triton_kernel_perf 2025-08-14T21:22:30.5660222Z * [new branch] try-runllm -> origin/try-runllm 2025-08-14T21:22:30.5660498Z * [new branch] type_dec -> origin/type_dec 2025-08-14T21:22:30.5660882Z * [new branch] udate-sphinx-dependancies -> origin/udate-sphinx-dependancies 2025-08-14T21:22:30.5661988Z * [new branch] update-audio-commit-hash/16307312222-1661-1 -> origin/update-audio-commit-hash/16307312222-1661-1 2025-08-14T21:22:30.5662897Z * [new branch] update-audio-commit-hash/16431348808-1673-1 -> origin/update-audio-commit-hash/16431348808-1673-1 2025-08-14T21:22:30.5663757Z * [new branch] update-audio-commit-hash/16510774365-1683-1 -> origin/update-audio-commit-hash/16510774365-1683-1 2025-08-14T21:22:30.5664646Z * [new branch] update-audio-commit-hash/16583472358-1693-1 -> origin/update-audio-commit-hash/16583472358-1693-1 2025-08-14T21:22:30.5665536Z * [new branch] update-audio-commit-hash/16663082088-1700-1 -> origin/update-audio-commit-hash/16663082088-1700-1 2025-08-14T21:22:30.5666643Z * [new branch] update-audio-commit-hash/16737365217-1704-1 -> origin/update-audio-commit-hash/16737365217-1704-1 2025-08-14T21:22:30.5667742Z * [new branch] update-audio-commit-hash/16791960928-1711-1 -> origin/update-audio-commit-hash/16791960928-1711-1 2025-08-14T21:22:30.5668889Z * [new branch] update-audio-commit-hash/16818882925-1712-1 -> origin/update-audio-commit-hash/16818882925-1712-1 2025-08-14T21:22:30.5675935Z * [new branch] update-audio-commit-hash/16895560422-1720-1 -> origin/update-audio-commit-hash/16895560422-1720-1 2025-08-14T21:22:30.5676421Z * [new branch] update-audio-commit-hash/16924174496-1738-1 -> origin/update-audio-commit-hash/16924174496-1738-1 
2025-08-14T21:22:30.5676811Z * [new branch] update-dynamic-shapes-doc -> origin/update-dynamic-shapes-doc 2025-08-14T21:22:30.5677987Z * [new branch] update-executorch-commit-hash/15694981040-1626-1 -> origin/update-executorch-commit-hash/15694981040-1626-1 2025-08-14T21:22:30.5679155Z * [new branch] update-triton-commit-hash/13663274526-1487-2 -> origin/update-triton-commit-hash/13663274526-1487-2 2025-08-14T21:22:30.5680450Z * [new branch] update-vision-commit-hash/15336342773-1607-1 -> origin/update-vision-commit-hash/15336342773-1607-1 2025-08-14T21:22:30.5681839Z * [new branch] update-vllm-commit-hash/16431348808-1673-1 -> origin/update-vllm-commit-hash/16431348808-1673-1 2025-08-14T21:22:30.5682687Z * [new branch] update-vllm-commit-hash/16484773233-1682-1 -> origin/update-vllm-commit-hash/16484773233-1682-1 2025-08-14T21:22:30.5683557Z * [new branch] update-vllm-commit-hash/16510774365-1683-1 -> origin/update-vllm-commit-hash/16510774365-1683-1 2025-08-14T21:22:30.5684490Z * [new branch] update-vllm-commit-hash/16534031105-1684-1 -> origin/update-vllm-commit-hash/16534031105-1684-1 2025-08-14T21:22:30.5691251Z * [new branch] update-vllm-commit-hash/16545403308-1687-1 -> origin/update-vllm-commit-hash/16545403308-1687-1 2025-08-14T21:22:30.5691775Z * [new branch] update-vllm-commit-hash/16557202787-1688-1 -> origin/update-vllm-commit-hash/16557202787-1688-1 2025-08-14T21:22:30.5692172Z * [new branch] update-vllm-commit-hash/16583472358-1693-1 -> origin/update-vllm-commit-hash/16583472358-1693-1 2025-08-14T21:22:30.5692699Z * [new branch] update-vllm-commit-hash/16663082088-1700-1 -> origin/update-vllm-commit-hash/16663082088-1700-1 2025-08-14T21:22:30.5693285Z * [new branch] update-vllm-commit-hash/16737365217-1704-1 -> origin/update-vllm-commit-hash/16737365217-1704-1 2025-08-14T21:22:30.5693797Z * [new branch] update-vllm-commit-hash/16843157111-1713-1 -> origin/update-vllm-commit-hash/16843157111-1713-1 2025-08-14T21:22:30.5694086Z * [new branch] update-vllm-commit-hash/16855312394-1714-1 -> origin/update-vllm-commit-hash/16855312394-1714-1 2025-08-14T21:22:30.5694470Z * [new branch] update-vllm-commit-hash/16924174496-1738-1 -> origin/update-vllm-commit-hash/16924174496-1738-1 2025-08-14T21:22:30.5694986Z * [new branch] update-vllm-commit-hash/16952608705-1745-1 -> origin/update-vllm-commit-hash/16952608705-1745-1 2025-08-14T21:22:30.5695413Z * [new branch] update-xla-commit-hash/16260974441-194-1 -> origin/update-xla-commit-hash/16260974441-194-1 2025-08-14T21:22:30.5695922Z * [new branch] update-xla-commit-hash/16717126778-197-1 -> origin/update-xla-commit-hash/16717126778-197-1 2025-08-14T21:22:30.5696524Z * [new branch] update-xla-commit-hash/16873912760-198-1 -> origin/update-xla-commit-hash/16873912760-198-1 2025-08-14T21:22:30.5697564Z * [new branch] update_docs_torch_multinomial_issue#125388 -> origin/update_docs_torch_multinomial_issue#125388 2025-08-14T21:22:30.5698363Z * [new branch] update_executorch_pin -> origin/update_executorch_pin 2025-08-14T21:22:30.5703806Z * [new branch] update_slow_tests_1722488736 -> origin/update_slow_tests_1722488736 2025-08-14T21:22:30.5704749Z * [new branch] update_slow_tests_1722879173 -> origin/update_slow_tests_1722879173 2025-08-14T21:22:30.5705689Z * [new branch] update_slow_tests_1752478971 -> origin/update_slow_tests_1752478971 2025-08-14T21:22:30.5706728Z * [new branch] update_submodule_FBGEMM -> origin/update_submodule_FBGEMM 2025-08-14T21:22:30.5707629Z * [new branch] update_submodule_kineto -> origin/update_submodule_kineto 
2025-08-14T21:22:30.5708598Z * [new branch] update_submodule_tensorpipe -> origin/update_submodule_tensorpipe 2025-08-14T21:22:30.5710044Z * [new branch] v0.1.2 -> origin/v0.1.2 2025-08-14T21:22:30.5711153Z * [new branch] v1.0.1 -> origin/v1.0.1 2025-08-14T21:22:30.5712128Z * [new branch] v1.0.3 -> origin/v1.0.3 2025-08-14T21:22:30.5713236Z * [new branch] v1.1.0 -> origin/v1.1.0 2025-08-14T21:22:30.5721600Z * [new branch] v1.2.0 -> origin/v1.2.0 2025-08-14T21:22:30.5721876Z * [new branch] v1.3.0 -> origin/v1.3.0 2025-08-14T21:22:30.5722134Z * [new branch] v1.3.1 -> origin/v1.3.1 2025-08-14T21:22:30.5722432Z * [new branch] validate_fn -> origin/validate_fn 2025-08-14T21:22:30.5722781Z * [new branch] validations_2.6 -> origin/validations_2.6 2025-08-14T21:22:30.5723069Z * [new branch] validations_2.8 -> origin/validations_2.8 2025-08-14T21:22:30.5723375Z * [new branch] viable/strict -> origin/viable/strict 2025-08-14T21:22:30.5723678Z * [new branch] vllmbuildci -> origin/vllmbuildci 2025-08-14T21:22:30.5723961Z * [new branch] vllmpin -> origin/vllmpin 2025-08-14T21:22:30.5724283Z * [new branch] vllmpintest -> origin/vllmpintest 2025-08-14T21:22:30.5725408Z * [new branch] wdvr-patch-1 -> origin/wdvr-patch-1 2025-08-14T21:22:30.5726465Z * [new branch] wdvr-patch-2 -> origin/wdvr-patch-2 2025-08-14T21:22:30.5731956Z * [new branch] wdvr/conda_devcontainer -> origin/wdvr/conda_devcontainer 2025-08-14T21:22:30.5737317Z * [new branch] wdvr/fix_logging_test -> origin/wdvr/fix_logging_test 2025-08-14T21:22:30.5738537Z * [new branch] wdvr/iss_145259 -> origin/wdvr/iss_145259 2025-08-14T21:22:30.5739561Z * [new branch] weight_sharing_cpp -> origin/weight_sharing_cpp 2025-08-14T21:22:30.5740988Z * [new branch] whc/flight -> origin/whc/flight 2025-08-14T21:22:30.5742085Z * [new branch] whc/flight4 -> origin/whc/flight4 2025-08-14T21:22:30.5746800Z * [new branch] whc/flight51 -> origin/whc/flight51 2025-08-14T21:22:30.5747069Z * [new branch] whc/flight53 -> origin/whc/flight53 2025-08-14T21:22:30.5747358Z * [new branch] whc/p2phang -> origin/whc/p2phang 2025-08-14T21:22:30.5748053Z * [new branch] whc/stage2 -> origin/whc/stage2 2025-08-14T21:22:30.5751287Z * [new branch] whc/uneven -> origin/whc/uneven 2025-08-14T21:22:30.5751497Z * [new branch] whc/uneven-merge -> origin/whc/uneven-merge 2025-08-14T21:22:30.5751766Z * [new branch] win_warnings -> origin/win_warnings 2025-08-14T21:22:30.5752705Z * [new branch] workonoldcommit -> origin/workonoldcommit 2025-08-14T21:22:30.5754020Z * [new branch] wwen/programming-model-2.8 -> origin/wwen/programming-model-2.8 2025-08-14T21:22:30.5755249Z * [new branch] xmfan/ca_0516 -> origin/xmfan/ca_0516 2025-08-14T21:22:30.5756124Z * [new branch] xmfan/ca_1051b93192 -> origin/xmfan/ca_1051b93192 2025-08-14T21:22:30.5763671Z * [new branch] xmfan/ca_1a722f62c248391fc4a542e8851a5559aa356ae8 -> origin/xmfan/ca_1a722f62c248391fc4a542e8851a5559aa356ae8 2025-08-14T21:22:30.5763996Z * [new branch] xmfan/ca_5a2be192d1 -> origin/xmfan/ca_5a2be192d1 2025-08-14T21:22:30.5764214Z * [new branch] xmfan/ca_9d59b516e9 -> origin/xmfan/ca_9d59b516e9 2025-08-14T21:22:30.5764409Z * [new branch] xmfan/ca_api -> origin/xmfan/ca_api 2025-08-14T21:22:30.5764593Z * [new branch] xmfan/ca_apr8 -> origin/xmfan/ca_apr8 2025-08-14T21:22:30.5764777Z * [new branch] xmfan/ca_base -> origin/xmfan/ca_base 2025-08-14T21:22:30.5765000Z * [new branch] xmfan/ca_cudagraphs -> origin/xmfan/ca_cudagraphs 2025-08-14T21:22:30.5765276Z * [new branch] xmfan/ca_dynamic -> origin/xmfan/ca_dynamic 
2025-08-14T21:22:30.5765717Z * [new branch] xmfan/ca_fix_dyn -> origin/xmfan/ca_fix_dyn 2025-08-14T21:22:30.5766649Z * [new branch] xmfan/ca_fix_lowering -> origin/xmfan/ca_fix_lowering 2025-08-14T21:22:30.5767599Z * [new branch] xmfan/ca_fix_polyfills -> origin/xmfan/ca_fix_polyfills 2025-08-14T21:22:30.5768361Z * [new branch] xmfan/ca_jan3 -> origin/xmfan/ca_jan3 2025-08-14T21:22:30.5769241Z * [new branch] xmfan/ca_jun18 -> origin/xmfan/ca_jun18 2025-08-14T21:22:30.5770169Z * [new branch] xmfan/ca_jun24 -> origin/xmfan/ca_jun24 2025-08-14T21:22:30.5771061Z * [new branch] xmfan/ca_mem_base -> origin/xmfan/ca_mem_base 2025-08-14T21:22:30.5776494Z * [new branch] xmfan/ca_mem_fix -> origin/xmfan/ca_mem_fix 2025-08-14T21:22:30.5777470Z * [new branch] xmfan/ca_memory_fix -> origin/xmfan/ca_memory_fix 2025-08-14T21:22:30.5778483Z * [new branch] xmfan/ca_memory_fix_rebased -> origin/xmfan/ca_memory_fix_rebased 2025-08-14T21:22:30.5779420Z * [new branch] xmfan/ca_memory_fix_rebased2 -> origin/xmfan/ca_memory_fix_rebased2 2025-08-14T21:22:30.5780293Z * [new branch] xmfan/ca_move_to_cuda -> origin/xmfan/ca_move_to_cuda 2025-08-14T21:22:30.5781168Z * [new branch] xmfan/ca_nested -> origin/xmfan/ca_nested 2025-08-14T21:22:30.5782065Z * [new branch] xmfan/ca_overhead -> origin/xmfan/ca_overhead 2025-08-14T21:22:30.5783060Z * [new branch] xmfan/ca_overhead_0eba7e5451 -> origin/xmfan/ca_overhead_0eba7e5451 2025-08-14T21:22:30.5783911Z * [new branch] xmfan/ca_scalar -> origin/xmfan/ca_scalar 2025-08-14T21:22:30.5784964Z * [new branch] xmfan/ca_subclass_mem_fix -> origin/xmfan/ca_subclass_mem_fix 2025-08-14T21:22:30.5785797Z * [new branch] xmfan/ca_warm_mem -> origin/xmfan/ca_warm_mem 2025-08-14T21:22:30.5790179Z * [new branch] xmfan/ca_warm_mem_base -> origin/xmfan/ca_warm_mem_base 2025-08-14T21:22:30.5790349Z * [new branch] xmfan/cacu_jun18 -> origin/xmfan/cacu_jun18 2025-08-14T21:22:30.5790514Z * [new branch] xmfan/cacu_jun19 -> origin/xmfan/cacu_jun19 2025-08-14T21:22:30.5790681Z * [new branch] xmfan/cacu_jun4 -> origin/xmfan/cacu_jun4 2025-08-14T21:22:30.5790831Z * [new branch] xmfan/cacu_may27 -> origin/xmfan/cacu_may27 2025-08-14T21:22:30.5791456Z * [new branch] xmfan/circular_dep -> origin/xmfan/circular_dep 2025-08-14T21:22:30.5792533Z * [new branch] xmfan/compiled_autograd_feb_29 -> origin/xmfan/compiled_autograd_feb_29 2025-08-14T21:22:30.5793471Z * [new branch] xmfan/compiled_autograd_graph_breaks -> origin/xmfan/compiled_autograd_graph_breaks 2025-08-14T21:22:30.5794497Z * [new branch] xmfan/disable_duck_shape -> origin/xmfan/disable_duck_shape 2025-08-14T21:22:30.5795411Z * [new branch] xmfan/fca_cpp_node_passthrough -> origin/xmfan/fca_cpp_node_passthrough 2025-08-14T21:22:30.5796308Z * [new branch] xmfan/issue_123374 -> origin/xmfan/issue_123374 2025-08-14T21:22:30.5797488Z * [new branch] xmfan/post_3945954741e2d37023c5d6954f9483008e0892f9 -> origin/xmfan/post_3945954741e2d37023c5d6954f9483008e0892f9 2025-08-14T21:22:30.5798372Z * [new branch] xmfan/pre_3945954741e2d37023c5d6954f9483008e0892f9 -> origin/xmfan/pre_3945954741e2d37023c5d6954f9483008e0892f9 2025-08-14T21:22:30.5799001Z * [new branch] xmfan/segfault_test -> origin/xmfan/segfault_test 2025-08-14T21:22:30.5802361Z * [new branch] xmfan/single_step -> origin/xmfan/single_step 2025-08-14T21:22:30.5808986Z * [new branch] xmfan/sth_0829 -> origin/xmfan/sth_0829 2025-08-14T21:22:30.5809262Z * [new branch] xmfan/test -> origin/xmfan/test 2025-08-14T21:22:30.5809676Z * [new branch] y-do-we-have-7-build-systems -> 
origin/y-do-we-have-7-build-systems 2025-08-14T21:22:30.5809938Z * [new branch] yguo/debug-0226-constexpr -> origin/yguo/debug-0226-constexpr 2025-08-14T21:22:30.5810282Z * [new branch] yguo/new_latest_changes -> origin/yguo/new_latest_changes 2025-08-14T21:22:30.5811043Z * [new branch] yguo/patch_constexpr_changes -> origin/yguo/patch_constexpr_changes 2025-08-14T21:22:30.5812042Z * [new branch] yihan_quantization -> origin/yihan_quantization 2025-08-14T21:22:30.5813446Z * [new branch] yiming/add_nativert_benchmark -> origin/yiming/add_nativert_benchmark 2025-08-14T21:22:30.5814199Z * [new branch] yiming/bootcamp -> origin/yiming/bootcamp 2025-08-14T21:22:30.5819232Z * [new branch] zainr/canary-test -> origin/zainr/canary-test 2025-08-14T21:22:30.5819596Z * [new branch] zainr/cleanup-gh-runners -> origin/zainr/cleanup-gh-runners 2025-08-14T21:22:30.5819881Z * [new branch] zainr/fixlint -> origin/zainr/fixlint 2025-08-14T21:22:30.5820304Z * [new branch] zainr/git-push-v2 -> origin/zainr/git-push-v2 2025-08-14T21:22:30.5820558Z * [new branch] zainr/lint-py3.9 -> origin/zainr/lint-py3.9 2025-08-14T21:22:30.5820877Z * [new branch] zainr/mypy15-claude -> origin/zainr/mypy15-claude 2025-08-14T21:22:30.5821646Z * [new branch] zainr/pre-push-hooks -> origin/zainr/pre-push-hooks 2025-08-14T21:22:30.5822794Z * [new branch] zainr/pull-migration-c -> origin/zainr/pull-migration-c 2025-08-14T21:22:30.5823753Z * [new branch] zainr/test2 -> origin/zainr/test2 2025-08-14T21:22:30.5824943Z * [new branch] zainr/unstable -> origin/zainr/unstable 2025-08-14T21:22:30.5825832Z * [new branch] zainr/unstable-xla -> origin/zainr/unstable-xla 2025-08-14T21:22:30.5826766Z * [new branch] zainr/uv-pip-fix -> origin/zainr/uv-pip-fix 2025-08-14T21:22:30.5827771Z * [new branch] zainr/vs-aarch64 -> origin/zainr/vs-aarch64 2025-08-14T21:22:30.5828944Z * [new branch] zasdfgbnm-patch-3 -> origin/zasdfgbnm-patch-3 2025-08-14T21:22:30.5834276Z * [new branch] zb2p -> origin/zb2p 2025-08-14T21:22:30.5835470Z * [new branch] zdevito-patch-1 -> origin/zdevito-patch-1 2025-08-14T21:22:30.5836443Z * [new branch] zeros-and-scatter-part2 -> origin/zeros-and-scatter-part2 2025-08-14T21:22:30.5838069Z * [new branch] zhxchen17/nativert/0 -> origin/zhxchen17/nativert/0 2025-08-14T21:22:30.5839244Z * [new branch] zhxchen17/scratch/0 -> origin/zhxchen17/scratch/0 2025-08-14T21:22:30.5840588Z * [new branch] zhxhcen17/moodycamel -> origin/zhxhcen17/moodycamel 2025-08-14T21:22:30.5842033Z * [new branch] zxiiro/bazel -> origin/zxiiro/bazel 2025-08-14T21:22:30.5843016Z * [new branch] zxiiro/get-hardware -> origin/zxiiro/get-hardware 2025-08-14T21:22:30.5848196Z * [new branch] zxiiro/main -> origin/zxiiro/main 2025-08-14T21:22:30.5848469Z * [new branch] zxiiro/test -> origin/zxiiro/test 2025-08-14T21:22:30.5849443Z * [new tag] bc2caa7fdf006894eff7af936babde69ab5a40f8-huydhn-debug -> bc2caa7fdf006894eff7af936babde69ab5a40f8-huydhn-debug 2025-08-14T21:22:30.5849651Z * [new tag] ci/binaries/77164 -> ci/binaries/77164 2025-08-14T21:22:30.5849802Z * [new tag] ciflow/binaries/138996 -> ciflow/binaries/138996 2025-08-14T21:22:30.5849963Z * [new tag] ciflow/binaries/143959 -> ciflow/binaries/143959 2025-08-14T21:22:30.5850108Z * [new tag] ciflow/binaries/154595 -> ciflow/binaries/154595 2025-08-14T21:22:30.5850460Z * [new tag] ciflow/binaries/156049 -> ciflow/binaries/156049 2025-08-14T21:22:30.5851148Z * [new tag] ciflow/binaries/156712 -> ciflow/binaries/156712 2025-08-14T21:22:30.5851740Z * [new tag] ciflow/binaries/157432 -> ciflow/binaries/157432 
2025-08-14T21:22:30.5852330Z * [new tag] ciflow/binaries/157685 -> ciflow/binaries/157685 2025-08-14T21:22:30.5852937Z * [new tag] ciflow/binaries/157689 -> ciflow/binaries/157689 2025-08-14T21:22:30.5853546Z * [new tag] ciflow/binaries/158104 -> ciflow/binaries/158104 2025-08-14T21:22:30.5854369Z * [new tag] ciflow/binaries/158623 -> ciflow/binaries/158623 2025-08-14T21:22:30.5855056Z * [new tag] ciflow/binaries/159827 -> ciflow/binaries/159827 2025-08-14T21:22:30.5856017Z * [new tag] ciflow/binaries/159869 -> ciflow/binaries/159869 2025-08-14T21:22:30.5856956Z * [new tag] ciflow/binaries/160593 -> ciflow/binaries/160593 2025-08-14T21:22:30.5857743Z * [new tag] ciflow/binaries_libtorch/143959 -> ciflow/binaries_libtorch/143959 2025-08-14T21:22:30.5858542Z * [new tag] ciflow/binaries_libtorch/156049 -> ciflow/binaries_libtorch/156049 2025-08-14T21:22:30.5863409Z * [new tag] ciflow/binaries_libtorch/157432 -> ciflow/binaries_libtorch/157432 2025-08-14T21:22:30.5864035Z * [new tag] ciflow/binaries_wheel/143959 -> ciflow/binaries_wheel/143959 2025-08-14T21:22:30.5864590Z * [new tag] ciflow/binaries_wheel/156049 -> ciflow/binaries_wheel/156049 2025-08-14T21:22:30.5865154Z * [new tag] ciflow/binaries_wheel/157432 -> ciflow/binaries_wheel/157432 2025-08-14T21:22:30.5865874Z * [new tag] ciflow/binaries_wheel/158733 -> ciflow/binaries_wheel/158733 2025-08-14T21:22:30.5866538Z * [new tag] ciflow/binaries_wheel/160301 -> ciflow/binaries_wheel/160301 2025-08-14T21:22:30.5867283Z * [new tag] ciflow/binaries_wheel/160496 -> ciflow/binaries_wheel/160496 2025-08-14T21:22:30.5868164Z * [new tag] ciflow/h100-distributed/156703 -> ciflow/h100-distributed/156703 2025-08-14T21:22:30.5868893Z * [new tag] ciflow/h100-symm-mem/151845 -> ciflow/h100-symm-mem/151845 2025-08-14T21:22:30.5869495Z * [new tag] ciflow/h100-symm-mem/155923 -> ciflow/h100-symm-mem/155923 2025-08-14T21:22:30.5870036Z * [new tag] ciflow/h100-symm-mem/157635 -> ciflow/h100-symm-mem/157635 2025-08-14T21:22:30.5870756Z * [new tag] ciflow/h100-symm-mem/159118 -> ciflow/h100-symm-mem/159118 2025-08-14T21:22:30.5871557Z * [new tag] ciflow/h100-symm-mem/159562 -> ciflow/h100-symm-mem/159562 2025-08-14T21:22:30.5872090Z * [new tag] ciflow/h100-symm-mem/159889 -> ciflow/h100-symm-mem/159889 2025-08-14T21:22:30.5881218Z * [new tag] ciflow/h100/159158 -> ciflow/h100/159158 2025-08-14T21:22:30.5881489Z * [new tag] ciflow/h100/160450 -> ciflow/h100/160450 2025-08-14T21:22:30.5881665Z * [new tag] ciflow/h100/160480 -> ciflow/h100/160480 2025-08-14T21:22:30.5881824Z * [new tag] ciflow/h100/160614 -> ciflow/h100/160614 2025-08-14T21:22:30.5882238Z * [new tag] ciflow/inductor-perf-test-nightly-rocm/151845 -> ciflow/inductor-perf-test-nightly-rocm/151845 2025-08-14T21:22:30.5882623Z * [new tag] ciflow/inductor-perf-test-nightly-rocm/160538 -> ciflow/inductor-perf-test-nightly-rocm/160538 2025-08-14T21:22:30.5882994Z * [new tag] ciflow/inductor-perf-test-nightly-x86-zen/156599 -> ciflow/inductor-perf-test-nightly-x86-zen/156599 2025-08-14T21:22:30.5883191Z * [new tag] ciflow/inductor-periodic/160406 -> ciflow/inductor-periodic/160406 2025-08-14T21:22:30.5883384Z * [new tag] ciflow/inductor-periodic/160538 -> ciflow/inductor-periodic/160538 2025-08-14T21:22:30.5883564Z * [new tag] ciflow/inductor-rocm/151845 -> ciflow/inductor-rocm/151845 2025-08-14T21:22:30.5883726Z * [new tag] ciflow/inductor-rocm/159158 -> ciflow/inductor-rocm/159158 2025-08-14T21:22:30.5883898Z * [new tag] ciflow/inductor-rocm/160073 -> ciflow/inductor-rocm/160073 
2025-08-14T21:22:30.5884055Z * [new tag] ciflow/inductor-rocm/160538 -> ciflow/inductor-rocm/160538 2025-08-14T21:22:30.5884205Z * [new tag] ciflow/inductor/134881 -> ciflow/inductor/134881 2025-08-14T21:22:30.5884358Z * [new tag] ciflow/inductor/137400 -> ciflow/inductor/137400 2025-08-14T21:22:30.5884507Z * [new tag] ciflow/inductor/144516 -> ciflow/inductor/144516 2025-08-14T21:22:30.5884882Z * [new tag] ciflow/inductor/146506 -> ciflow/inductor/146506 2025-08-14T21:22:30.5885434Z * [new tag] ciflow/inductor/147360 -> ciflow/inductor/147360 2025-08-14T21:22:30.5886054Z * [new tag] ciflow/inductor/147990 -> ciflow/inductor/147990 2025-08-14T21:22:30.5886582Z * [new tag] ciflow/inductor/148180 -> ciflow/inductor/148180 2025-08-14T21:22:30.5891400Z * [new tag] ciflow/inductor/148328 -> ciflow/inductor/148328 2025-08-14T21:22:30.5896381Z * [new tag] ciflow/inductor/148484 -> ciflow/inductor/148484 2025-08-14T21:22:30.5896973Z * [new tag] ciflow/inductor/148492 -> ciflow/inductor/148492 2025-08-14T21:22:30.5897557Z * [new tag] ciflow/inductor/150302 -> ciflow/inductor/150302 2025-08-14T21:22:30.5898405Z * [new tag] ciflow/inductor/151845 -> ciflow/inductor/151845 2025-08-14T21:22:30.5899185Z * [new tag] ciflow/inductor/152198 -> ciflow/inductor/152198 2025-08-14T21:22:30.5899947Z * [new tag] ciflow/inductor/152624 -> ciflow/inductor/152624 2025-08-14T21:22:30.5900602Z * [new tag] ciflow/inductor/153966 -> ciflow/inductor/153966 2025-08-14T21:22:30.5901221Z * [new tag] ciflow/inductor/154193 -> ciflow/inductor/154193 2025-08-14T21:22:30.5906287Z * [new tag] ciflow/inductor/154650 -> ciflow/inductor/154650 2025-08-14T21:22:30.5906569Z * [new tag] ciflow/inductor/154694 -> ciflow/inductor/154694 2025-08-14T21:22:30.5906823Z * [new tag] ciflow/inductor/155072 -> ciflow/inductor/155072 2025-08-14T21:22:30.5907090Z * [new tag] ciflow/inductor/155152 -> ciflow/inductor/155152 2025-08-14T21:22:30.5907368Z * [new tag] ciflow/inductor/155153 -> ciflow/inductor/155153 2025-08-14T21:22:30.5907645Z * [new tag] ciflow/inductor/155154 -> ciflow/inductor/155154 2025-08-14T21:22:30.5908144Z * [new tag] ciflow/inductor/155501 -> ciflow/inductor/155501 2025-08-14T21:22:30.5908866Z * [new tag] ciflow/inductor/155502 -> ciflow/inductor/155502 2025-08-14T21:22:30.5909392Z * [new tag] ciflow/inductor/155503 -> ciflow/inductor/155503 2025-08-14T21:22:30.5910281Z * [new tag] ciflow/inductor/155504 -> ciflow/inductor/155504 2025-08-14T21:22:30.5911326Z * [new tag] ciflow/inductor/155557 -> ciflow/inductor/155557 2025-08-14T21:22:30.5911936Z * [new tag] ciflow/inductor/155608 -> ciflow/inductor/155608 2025-08-14T21:22:30.5912591Z * [new tag] ciflow/inductor/155923 -> ciflow/inductor/155923 2025-08-14T21:22:30.5913218Z * [new tag] ciflow/inductor/155928 -> ciflow/inductor/155928 2025-08-14T21:22:30.5914064Z * [new tag] ciflow/inductor/155958 -> ciflow/inductor/155958 2025-08-14T21:22:30.5914643Z * [new tag] ciflow/inductor/156049 -> ciflow/inductor/156049 2025-08-14T21:22:30.5915292Z * [new tag] ciflow/inductor/156851 -> ciflow/inductor/156851 2025-08-14T21:22:30.5915910Z * [new tag] ciflow/inductor/156967 -> ciflow/inductor/156967 2025-08-14T21:22:30.5920523Z * [new tag] ciflow/inductor/157148 -> ciflow/inductor/157148 2025-08-14T21:22:30.5920690Z * [new tag] ciflow/inductor/157149 -> ciflow/inductor/157149 2025-08-14T21:22:30.5920860Z * [new tag] ciflow/inductor/157152 -> ciflow/inductor/157152 2025-08-14T21:22:30.5921008Z * [new tag] ciflow/inductor/157542 -> ciflow/inductor/157542 2025-08-14T21:22:30.5921202Z * 
[new tag] ciflow/inductor/157572 -> ciflow/inductor/157572 2025-08-14T21:22:30.5921346Z * [new tag] ciflow/inductor/157635 -> ciflow/inductor/157635 2025-08-14T21:22:30.5921497Z * [new tag] ciflow/inductor/157685 -> ciflow/inductor/157685 2025-08-14T21:22:30.5921699Z * [new tag] ciflow/inductor/157686 -> ciflow/inductor/157686 2025-08-14T21:22:30.5921928Z * [new tag] ciflow/inductor/157689 -> ciflow/inductor/157689 2025-08-14T21:22:30.5922429Z * [new tag] ciflow/inductor/157699 -> ciflow/inductor/157699 2025-08-14T21:22:30.5925681Z * [new tag] ciflow/inductor/157743 -> ciflow/inductor/157743 2025-08-14T21:22:30.5925942Z * [new tag] ciflow/inductor/157944 -> ciflow/inductor/157944 2025-08-14T21:22:30.5926212Z * [new tag] ciflow/inductor/157971 -> ciflow/inductor/157971 2025-08-14T21:22:30.5926487Z * [new tag] ciflow/inductor/157994 -> ciflow/inductor/157994 2025-08-14T21:22:30.5926761Z * [new tag] ciflow/inductor/158061 -> ciflow/inductor/158061 2025-08-14T21:22:30.5927042Z * [new tag] ciflow/inductor/158091 -> ciflow/inductor/158091 2025-08-14T21:22:30.5927330Z * [new tag] ciflow/inductor/158097 -> ciflow/inductor/158097 2025-08-14T21:22:30.5927887Z * [new tag] ciflow/inductor/158098 -> ciflow/inductor/158098 2025-08-14T21:22:30.5928496Z * [new tag] ciflow/inductor/158104 -> ciflow/inductor/158104 2025-08-14T21:22:30.5929127Z * [new tag] ciflow/inductor/158168 -> ciflow/inductor/158168 2025-08-14T21:22:30.5929857Z * [new tag] ciflow/inductor/158250 -> ciflow/inductor/158250 2025-08-14T21:22:30.5930486Z * [new tag] ciflow/inductor/158321 -> ciflow/inductor/158321 2025-08-14T21:22:30.5935380Z * [new tag] ciflow/inductor/158609 -> ciflow/inductor/158609 2025-08-14T21:22:30.5936125Z * [new tag] ciflow/inductor/158647 -> ciflow/inductor/158647 2025-08-14T21:22:30.5936763Z * [new tag] ciflow/inductor/158914 -> ciflow/inductor/158914 2025-08-14T21:22:30.5937670Z * [new tag] ciflow/inductor/158932 -> ciflow/inductor/158932 2025-08-14T21:22:30.5938164Z * [new tag] ciflow/inductor/158987 -> ciflow/inductor/158987 2025-08-14T21:22:30.5938837Z * [new tag] ciflow/inductor/159009 -> ciflow/inductor/159009 2025-08-14T21:22:30.5939444Z * [new tag] ciflow/inductor/159010 -> ciflow/inductor/159010 2025-08-14T21:22:30.5940113Z * [new tag] ciflow/inductor/159093 -> ciflow/inductor/159093 2025-08-14T21:22:30.5940722Z * [new tag] ciflow/inductor/159158 -> ciflow/inductor/159158 2025-08-14T21:22:30.5941352Z * [new tag] ciflow/inductor/159197 -> ciflow/inductor/159197 2025-08-14T21:22:30.5942207Z * [new tag] ciflow/inductor/159274 -> ciflow/inductor/159274 2025-08-14T21:22:30.5942858Z * [new tag] ciflow/inductor/159281 -> ciflow/inductor/159281 2025-08-14T21:22:30.5943499Z * [new tag] ciflow/inductor/159329 -> ciflow/inductor/159329 2025-08-14T21:22:30.5944101Z * [new tag] ciflow/inductor/159361 -> ciflow/inductor/159361 2025-08-14T21:22:30.5944734Z * [new tag] ciflow/inductor/159365 -> ciflow/inductor/159365 2025-08-14T21:22:30.5954051Z * [new tag] ciflow/inductor/159366 -> ciflow/inductor/159366 2025-08-14T21:22:30.5954355Z * [new tag] ciflow/inductor/159367 -> ciflow/inductor/159367 2025-08-14T21:22:30.5954667Z * [new tag] ciflow/inductor/159368 -> ciflow/inductor/159368 2025-08-14T21:22:30.5954963Z * [new tag] ciflow/inductor/159473 -> ciflow/inductor/159473 2025-08-14T21:22:30.5955263Z * [new tag] ciflow/inductor/159483 -> ciflow/inductor/159483 2025-08-14T21:22:30.5955557Z * [new tag] ciflow/inductor/159508 -> ciflow/inductor/159508 2025-08-14T21:22:30.5955827Z * [new tag] ciflow/inductor/159523 -> 
ciflow/inductor/159523 2025-08-14T21:22:30.5956279Z * [new tag] ciflow/inductor/159678 -> ciflow/inductor/159678 2025-08-14T21:22:30.5956536Z * [new tag] ciflow/inductor/159691 -> ciflow/inductor/159691 2025-08-14T21:22:30.5956811Z * [new tag] ciflow/inductor/159778 -> ciflow/inductor/159778 2025-08-14T21:22:30.5957086Z * [new tag] ciflow/inductor/159786 -> ciflow/inductor/159786 2025-08-14T21:22:30.5957353Z * [new tag] ciflow/inductor/159817 -> ciflow/inductor/159817 2025-08-14T21:22:30.5957638Z * [new tag] ciflow/inductor/159842 -> ciflow/inductor/159842 2025-08-14T21:22:30.5957903Z * [new tag] ciflow/inductor/159864 -> ciflow/inductor/159864 2025-08-14T21:22:30.5958176Z * [new tag] ciflow/inductor/159865 -> ciflow/inductor/159865 2025-08-14T21:22:30.5958462Z * [new tag] ciflow/inductor/159869 -> ciflow/inductor/159869 2025-08-14T21:22:30.5958642Z * [new tag] ciflow/inductor/159875 -> ciflow/inductor/159875 2025-08-14T21:22:30.5958792Z * [new tag] ciflow/inductor/159889 -> ciflow/inductor/159889 2025-08-14T21:22:30.5958998Z * [new tag] ciflow/inductor/159902 -> ciflow/inductor/159902 2025-08-14T21:22:30.5959275Z * [new tag] ciflow/inductor/159923 -> ciflow/inductor/159923 2025-08-14T21:22:30.5959811Z * [new tag] ciflow/inductor/159944 -> ciflow/inductor/159944 2025-08-14T21:22:30.5968252Z * [new tag] ciflow/inductor/160004 -> ciflow/inductor/160004 2025-08-14T21:22:30.5968547Z * [new tag] ciflow/inductor/160080 -> ciflow/inductor/160080 2025-08-14T21:22:30.5968847Z * [new tag] ciflow/inductor/160108 -> ciflow/inductor/160108 2025-08-14T21:22:30.5969146Z * [new tag] ciflow/inductor/160109 -> ciflow/inductor/160109 2025-08-14T21:22:30.5969571Z * [new tag] ciflow/inductor/160111 -> ciflow/inductor/160111 2025-08-14T21:22:30.5969883Z * [new tag] ciflow/inductor/160113 -> ciflow/inductor/160113 2025-08-14T21:22:30.5970170Z * [new tag] ciflow/inductor/160127 -> ciflow/inductor/160127 2025-08-14T21:22:30.5970479Z * [new tag] ciflow/inductor/160131 -> ciflow/inductor/160131 2025-08-14T21:22:30.5970756Z * [new tag] ciflow/inductor/160132 -> ciflow/inductor/160132 2025-08-14T21:22:30.5971193Z * [new tag] ciflow/inductor/160136 -> ciflow/inductor/160136 2025-08-14T21:22:30.5971793Z * [new tag] ciflow/inductor/160138 -> ciflow/inductor/160138 2025-08-14T21:22:30.5972442Z * [new tag] ciflow/inductor/160151 -> ciflow/inductor/160151 2025-08-14T21:22:30.5973103Z * [new tag] ciflow/inductor/160152 -> ciflow/inductor/160152 2025-08-14T21:22:30.5973708Z * [new tag] ciflow/inductor/160154 -> ciflow/inductor/160154 2025-08-14T21:22:30.5974452Z * [new tag] ciflow/inductor/160156 -> ciflow/inductor/160156 2025-08-14T21:22:30.5978644Z * [new tag] ciflow/inductor/160161 -> ciflow/inductor/160161 2025-08-14T21:22:30.5978936Z * [new tag] ciflow/inductor/160166 -> ciflow/inductor/160166 2025-08-14T21:22:30.5979209Z * [new tag] ciflow/inductor/160168 -> ciflow/inductor/160168 2025-08-14T21:22:30.5979472Z * [new tag] ciflow/inductor/160174 -> ciflow/inductor/160174 2025-08-14T21:22:30.5979740Z * [new tag] ciflow/inductor/160181 -> ciflow/inductor/160181 2025-08-14T21:22:30.5980017Z * [new tag] ciflow/inductor/160183 -> ciflow/inductor/160183 2025-08-14T21:22:30.5980283Z * [new tag] ciflow/inductor/160190 -> ciflow/inductor/160190 2025-08-14T21:22:30.5980567Z * [new tag] ciflow/inductor/160198 -> ciflow/inductor/160198 2025-08-14T21:22:30.5981254Z * [new tag] ciflow/inductor/160201 -> ciflow/inductor/160201 2025-08-14T21:22:30.5981869Z * [new tag] ciflow/inductor/160209 -> ciflow/inductor/160209 
2025-08-14T21:22:30.5982750Z * [new tag] ciflow/inductor/160218 -> ciflow/inductor/160218 2025-08-14T21:22:30.5983359Z * [new tag] ciflow/inductor/160239 -> ciflow/inductor/160239 2025-08-14T21:22:30.5983991Z * [new tag] ciflow/inductor/160250 -> ciflow/inductor/160250 2025-08-14T21:22:30.5984670Z * [new tag] ciflow/inductor/160253 -> ciflow/inductor/160253 2025-08-14T21:22:30.5985310Z * [new tag] ciflow/inductor/160266 -> ciflow/inductor/160266 2025-08-14T21:22:30.5985940Z * [new tag] ciflow/inductor/160282 -> ciflow/inductor/160282 2025-08-14T21:22:30.5986581Z * [new tag] ciflow/inductor/160298 -> ciflow/inductor/160298 2025-08-14T21:22:30.5987231Z * [new tag] ciflow/inductor/160301 -> ciflow/inductor/160301 2025-08-14T21:22:30.5988014Z * [new tag] ciflow/inductor/160310 -> ciflow/inductor/160310 2025-08-14T21:22:30.5988711Z * [new tag] ciflow/inductor/160323 -> ciflow/inductor/160323 2025-08-14T21:22:30.5994159Z * [new tag] ciflow/inductor/160324 -> ciflow/inductor/160324 2025-08-14T21:22:30.5994932Z * [new tag] ciflow/inductor/160325 -> ciflow/inductor/160325 2025-08-14T21:22:30.5995759Z * [new tag] ciflow/inductor/160326 -> ciflow/inductor/160326 2025-08-14T21:22:30.5996572Z * [new tag] ciflow/inductor/160327 -> ciflow/inductor/160327 2025-08-14T21:22:30.5997273Z * [new tag] ciflow/inductor/160328 -> ciflow/inductor/160328 2025-08-14T21:22:30.5998159Z * [new tag] ciflow/inductor/160329 -> ciflow/inductor/160329 2025-08-14T21:22:30.5998788Z * [new tag] ciflow/inductor/160351 -> ciflow/inductor/160351 2025-08-14T21:22:30.5999441Z * [new tag] ciflow/inductor/160353 -> ciflow/inductor/160353 2025-08-14T21:22:30.6000064Z * [new tag] ciflow/inductor/160362 -> ciflow/inductor/160362 2025-08-14T21:22:30.6000690Z * [new tag] ciflow/inductor/160363 -> ciflow/inductor/160363 2025-08-14T21:22:30.6001365Z * [new tag] ciflow/inductor/160364 -> ciflow/inductor/160364 2025-08-14T21:22:30.6002083Z * [new tag] ciflow/inductor/160365 -> ciflow/inductor/160365 2025-08-14T21:22:30.6002727Z * [new tag] ciflow/inductor/160366 -> ciflow/inductor/160366 2025-08-14T21:22:30.6007702Z * [new tag] ciflow/inductor/160367 -> ciflow/inductor/160367 2025-08-14T21:22:30.6007979Z * [new tag] ciflow/inductor/160368 -> ciflow/inductor/160368 2025-08-14T21:22:30.6008266Z * [new tag] ciflow/inductor/160369 -> ciflow/inductor/160369 2025-08-14T21:22:30.6008549Z * [new tag] ciflow/inductor/160371 -> ciflow/inductor/160371 2025-08-14T21:22:30.6008772Z * [new tag] ciflow/inductor/160374 -> ciflow/inductor/160374 2025-08-14T21:22:30.6008914Z * [new tag] ciflow/inductor/160375 -> ciflow/inductor/160375 2025-08-14T21:22:30.6009064Z * [new tag] ciflow/inductor/160377 -> ciflow/inductor/160377 2025-08-14T21:22:30.6009214Z * [new tag] ciflow/inductor/160380 -> ciflow/inductor/160380 2025-08-14T21:22:30.6009360Z * [new tag] ciflow/inductor/160381 -> ciflow/inductor/160381 2025-08-14T21:22:30.6009704Z * [new tag] ciflow/inductor/160383 -> ciflow/inductor/160383 2025-08-14T21:22:30.6012086Z * [new tag] ciflow/inductor/160394 -> ciflow/inductor/160394 2025-08-14T21:22:30.6012287Z * [new tag] ciflow/inductor/160401 -> ciflow/inductor/160401 2025-08-14T21:22:30.6012523Z * [new tag] ciflow/inductor/160402 -> ciflow/inductor/160402 2025-08-14T21:22:30.6012678Z * [new tag] ciflow/inductor/160403 -> ciflow/inductor/160403 2025-08-14T21:22:30.6013544Z * [new tag] ciflow/inductor/160424 -> ciflow/inductor/160424 2025-08-14T21:22:30.6014115Z * [new tag] ciflow/inductor/160426 -> ciflow/inductor/160426 2025-08-14T21:22:30.6014958Z * [new tag] 
ciflow/inductor/160431 -> ciflow/inductor/160431 2025-08-14T21:22:30.6015609Z * [new tag] ciflow/inductor/160448 -> ciflow/inductor/160448 2025-08-14T21:22:30.6016193Z * [new tag] ciflow/inductor/160450 -> ciflow/inductor/160450 2025-08-14T21:22:30.6016860Z * [new tag] ciflow/inductor/160455 -> ciflow/inductor/160455 2025-08-14T21:22:30.6017750Z * [new tag] ciflow/inductor/160456 -> ciflow/inductor/160456 2025-08-14T21:22:30.6027066Z * [new tag] ciflow/inductor/160461 -> ciflow/inductor/160461 2025-08-14T21:22:30.6027639Z * [new tag] ciflow/inductor/160462 -> ciflow/inductor/160462 2025-08-14T21:22:30.6028285Z * [new tag] ciflow/inductor/160467 -> ciflow/inductor/160467 2025-08-14T21:22:30.6028959Z * [new tag] ciflow/inductor/160470 -> ciflow/inductor/160470 2025-08-14T21:22:30.6029579Z * [new tag] ciflow/inductor/160473 -> ciflow/inductor/160473 2025-08-14T21:22:30.6030216Z * [new tag] ciflow/inductor/160476 -> ciflow/inductor/160476 2025-08-14T21:22:30.6030840Z * [new tag] ciflow/inductor/160480 -> ciflow/inductor/160480 2025-08-14T21:22:30.6031654Z * [new tag] ciflow/inductor/160481 -> ciflow/inductor/160481 2025-08-14T21:22:30.6032327Z * [new tag] ciflow/inductor/160482 -> ciflow/inductor/160482 2025-08-14T21:22:30.6040851Z * [new tag] ciflow/inductor/160483 -> ciflow/inductor/160483 2025-08-14T21:22:30.6041204Z * [new tag] ciflow/inductor/160485 -> ciflow/inductor/160485 2025-08-14T21:22:30.6041501Z * [new tag] ciflow/inductor/160486 -> ciflow/inductor/160486 2025-08-14T21:22:30.6041791Z * [new tag] ciflow/inductor/160503 -> ciflow/inductor/160503 2025-08-14T21:22:30.6042083Z * [new tag] ciflow/inductor/160510 -> ciflow/inductor/160510 2025-08-14T21:22:30.6042375Z * [new tag] ciflow/inductor/160527 -> ciflow/inductor/160527 2025-08-14T21:22:30.6042667Z * [new tag] ciflow/inductor/160530 -> ciflow/inductor/160530 2025-08-14T21:22:30.6042959Z * [new tag] ciflow/inductor/160531 -> ciflow/inductor/160531 2025-08-14T21:22:30.6043264Z * [new tag] ciflow/inductor/160538 -> ciflow/inductor/160538 2025-08-14T21:22:30.6043554Z * [new tag] ciflow/inductor/160539 -> ciflow/inductor/160539 2025-08-14T21:22:30.6043811Z * [new tag] ciflow/inductor/160540 -> ciflow/inductor/160540 2025-08-14T21:22:30.6044107Z * [new tag] ciflow/inductor/160548 -> ciflow/inductor/160548 2025-08-14T21:22:30.6044374Z * [new tag] ciflow/inductor/160561 -> ciflow/inductor/160561 2025-08-14T21:22:30.6044640Z * [new tag] ciflow/inductor/160576 -> ciflow/inductor/160576 2025-08-14T21:22:30.6044915Z * [new tag] ciflow/inductor/160578 -> ciflow/inductor/160578 2025-08-14T21:22:30.6045183Z * [new tag] ciflow/inductor/160580 -> ciflow/inductor/160580 2025-08-14T21:22:30.6045479Z * [new tag] ciflow/inductor/160583 -> ciflow/inductor/160583 2025-08-14T21:22:30.6045752Z * [new tag] ciflow/inductor/160589 -> ciflow/inductor/160589 2025-08-14T21:22:30.6046196Z * [new tag] ciflow/inductor/160590 -> ciflow/inductor/160590 2025-08-14T21:22:30.6051298Z * [new tag] ciflow/inductor/160592 -> ciflow/inductor/160592 2025-08-14T21:22:30.6051464Z * [new tag] ciflow/inductor/160596 -> ciflow/inductor/160596 2025-08-14T21:22:30.6051636Z * [new tag] ciflow/inductor/160601 -> ciflow/inductor/160601 2025-08-14T21:22:30.6051815Z * [new tag] ciflow/inductor/160607 -> ciflow/inductor/160607 2025-08-14T21:22:30.6051974Z * [new tag] ciflow/inductor/160608 -> ciflow/inductor/160608 2025-08-14T21:22:30.6052194Z * [new tag] ciflow/inductor/160611 -> ciflow/inductor/160611 2025-08-14T21:22:30.6052347Z * [new tag] ciflow/inductor/160614 -> 
ciflow/inductor/160614 2025-08-14T21:22:30.6052520Z * [new tag] ciflow/inductor/160616 -> ciflow/inductor/160616 2025-08-14T21:22:30.6052898Z * [new tag] ciflow/inductor/160619 -> ciflow/inductor/160619 2025-08-14T21:22:30.6053523Z * [new tag] ciflow/inductor/160625 -> ciflow/inductor/160625 2025-08-14T21:22:30.6054155Z * [new tag] ciflow/inductor/160635 -> ciflow/inductor/160635 2025-08-14T21:22:30.6055989Z * [new tag] ciflow/inductor/160649 -> ciflow/inductor/160649 2025-08-14T21:22:30.6056288Z * [new tag] ciflow/inductor/160658 -> ciflow/inductor/160658 2025-08-14T21:22:30.6056562Z * [new tag] ciflow/inductor/160662 -> ciflow/inductor/160662 2025-08-14T21:22:30.6057018Z * [new tag] ciflow/inductor/160668 -> ciflow/inductor/160668 2025-08-14T21:22:30.6057647Z * [new tag] ciflow/inductor/160669 -> ciflow/inductor/160669 2025-08-14T21:22:30.6058253Z * [new tag] ciflow/inductor/160670 -> ciflow/inductor/160670 2025-08-14T21:22:30.6059228Z * [new tag] ciflow/inductor/160671 -> ciflow/inductor/160671 2025-08-14T21:22:30.6059743Z * [new tag] ciflow/inductor/160677 -> ciflow/inductor/160677 2025-08-14T21:22:30.6060442Z * [new tag] ciflow/inductor/160679 -> ciflow/inductor/160679 2025-08-14T21:22:30.6061285Z * [new tag] ciflow/inductor/3b9a386 -> ciflow/inductor/3b9a386 2025-08-14T21:22:30.6068561Z * [new tag] ciflow/inductor/3d4b92b -> ciflow/inductor/3d4b92b 2025-08-14T21:22:30.6069331Z * [new tag] ciflow/inductor/d224ac7 -> ciflow/inductor/d224ac7 2025-08-14T21:22:30.6070088Z * [new tag] ciflow/linux-aarch64/147855 -> ciflow/linux-aarch64/147855 2025-08-14T21:22:30.6070720Z * [new tag] ciflow/linux-aarch64/157994 -> ciflow/linux-aarch64/157994 2025-08-14T21:22:30.6071346Z * [new tag] ciflow/linux-aarch64/159737 -> ciflow/linux-aarch64/159737 2025-08-14T21:22:30.6071939Z * [new tag] ciflow/linux-aarch64/160078 -> ciflow/linux-aarch64/160078 2025-08-14T21:22:30.6072482Z * [new tag] ciflow/linux-aarch64/160299 -> ciflow/linux-aarch64/160299 2025-08-14T21:22:30.6073079Z * [new tag] ciflow/linux-aarch64/160301 -> ciflow/linux-aarch64/160301 2025-08-14T21:22:30.6073779Z * [new tag] ciflow/mps/155923 -> ciflow/mps/155923 2025-08-14T21:22:30.6074361Z * [new tag] ciflow/mps/157553 -> ciflow/mps/157553 2025-08-14T21:22:30.6074965Z * [new tag] ciflow/mps/157635 -> ciflow/mps/157635 2025-08-14T21:22:30.6075544Z * [new tag] ciflow/mps/160541 -> ciflow/mps/160541 2025-08-14T21:22:30.6084302Z * [new tag] ciflow/nightly/156049 -> ciflow/nightly/156049 2025-08-14T21:22:30.6084619Z * [new tag] ciflow/nightly/158104 -> ciflow/nightly/158104 2025-08-14T21:22:30.6084970Z * [new tag] ciflow/op-benchmark/157994 -> ciflow/op-benchmark/157994 2025-08-14T21:22:30.6085555Z * [new tag] ciflow/periodic-rocm-mi300/139971 -> ciflow/periodic-rocm-mi300/139971 2025-08-14T21:22:30.6085976Z * [new tag] ciflow/periodic-rocm-mi300/160073 -> ciflow/periodic-rocm-mi300/160073 2025-08-14T21:22:30.6086399Z * [new tag] ciflow/periodic-rocm-mi300/160538 -> ciflow/periodic-rocm-mi300/160538 2025-08-14T21:22:30.6086623Z * [new tag] ciflow/periodic/054a2fd -> ciflow/periodic/054a2fd 2025-08-14T21:22:30.6086782Z * [new tag] ciflow/periodic/131296 -> ciflow/periodic/131296 2025-08-14T21:22:30.6087049Z * [new tag] ciflow/periodic/139971 -> ciflow/periodic/139971 2025-08-14T21:22:30.6087322Z * [new tag] ciflow/periodic/143959 -> ciflow/periodic/143959 2025-08-14T21:22:30.6087599Z * [new tag] ciflow/periodic/154595 -> ciflow/periodic/154595 2025-08-14T21:22:30.6087788Z * [new tag] ciflow/periodic/156703 -> ciflow/periodic/156703 
2025-08-14T21:22:30.6088054Z * [new tag] ciflow/periodic/160201 -> ciflow/periodic/160201 2025-08-14T21:22:30.6088334Z * [new tag] ciflow/periodic/160424 -> ciflow/periodic/160424 2025-08-14T21:22:30.6088603Z * [new tag] ciflow/periodic/160538 -> ciflow/periodic/160538 2025-08-14T21:22:30.6089310Z * [new tag] ciflow/periodic/1febab2a89302464f6c7d69cfbef7a24c421ea65 -> ciflow/periodic/1febab2a89302464f6c7d69cfbef7a24c421ea65 2025-08-14T21:22:30.6089598Z * [new tag] ciflow/periodic/2a6d37d -> ciflow/periodic/2a6d37d 2025-08-14T21:22:30.6090389Z * [new tag] ciflow/periodic/2ee22e435131369a7e4f8cc4732579acc29a941b -> ciflow/periodic/2ee22e435131369a7e4f8cc4732579acc29a941b 2025-08-14T21:22:30.6090709Z * [new tag] ciflow/periodic/317eeb8 -> ciflow/periodic/317eeb8 2025-08-14T21:22:30.6090860Z * [new tag] ciflow/periodic/3c32 -> ciflow/periodic/3c32 2025-08-14T21:22:30.6099012Z * [new tag] ciflow/periodic/3e98831 -> ciflow/periodic/3e98831 2025-08-14T21:22:30.6099765Z * [new tag] ciflow/periodic/4a773e1e867f28a8ff0b15203e5cd9548f74fcee -> ciflow/periodic/4a773e1e867f28a8ff0b15203e5cd9548f74fcee 2025-08-14T21:22:30.6100516Z * [new tag] ciflow/periodic/5f5f508aa836a46dfe88857fb223049616b94e93 -> ciflow/periodic/5f5f508aa836a46dfe88857fb223049616b94e93 2025-08-14T21:22:30.6100860Z * [new tag] ciflow/periodic/94512-point -> ciflow/periodic/94512-point 2025-08-14T21:22:30.6101216Z * [new tag] ciflow/periodic/csl/test87519 -> ciflow/periodic/csl/test87519 2025-08-14T21:22:30.6101548Z * [new tag] ciflow/periodic/csltest88275 -> ciflow/periodic/csltest88275 2025-08-14T21:22:30.6101791Z * [new tag] ciflow/periodic/csltest88761 -> ciflow/periodic/csltest88761 2025-08-14T21:22:30.6102513Z * [new tag] ciflow/periodic/d7114f05b10de8e6de81ffc567d63944c3117d51 -> ciflow/periodic/d7114f05b10de8e6de81ffc567d63944c3117d51 2025-08-14T21:22:30.6102978Z * [new tag] ciflow/periodic/release_1.12 -> ciflow/periodic/release_1.12 2025-08-14T21:22:30.6103853Z * [new tag] ciflow/periodic/release_1.12.0 -> ciflow/periodic/release_1.12.0 2025-08-14T21:22:30.6104820Z * [new tag] ciflow/periodic/sha-ec5b83 -> ciflow/periodic/sha-ec5b83 2025-08-14T21:22:30.6109214Z * [new tag] ciflow/rocm-mi300/151360 -> ciflow/rocm-mi300/151360 2025-08-14T21:22:30.6109511Z * [new tag] ciflow/rocm-mi300/159158 -> ciflow/rocm-mi300/159158 2025-08-14T21:22:30.6109775Z * [new tag] ciflow/rocm-mi300/160073 -> ciflow/rocm-mi300/160073 2025-08-14T21:22:30.6110057Z * [new tag] ciflow/rocm-mi300/160468 -> ciflow/rocm-mi300/160468 2025-08-14T21:22:30.6110431Z * [new tag] ciflow/rocm-mi300/160538 -> ciflow/rocm-mi300/160538 2025-08-14T21:22:30.6110708Z * [new tag] ciflow/rocm-mi355/160215 -> ciflow/rocm-mi355/160215 2025-08-14T21:22:30.6110960Z * [new tag] ciflow/rocm/148492 -> ciflow/rocm/148492 2025-08-14T21:22:30.6111213Z * [new tag] ciflow/rocm/151360 -> ciflow/rocm/151360 2025-08-14T21:22:30.6111466Z * [new tag] ciflow/rocm/151845 -> ciflow/rocm/151845 2025-08-14T21:22:30.6111748Z * [new tag] ciflow/rocm/154864 -> ciflow/rocm/154864 2025-08-14T21:22:30.6112396Z * [new tag] ciflow/rocm/156491 -> ciflow/rocm/156491 2025-08-14T21:22:30.6113002Z * [new tag] ciflow/rocm/158219 -> ciflow/rocm/158219 2025-08-14T21:22:30.6113610Z * [new tag] ciflow/rocm/158220 -> ciflow/rocm/158220 2025-08-14T21:22:30.6114186Z * [new tag] ciflow/rocm/158224 -> ciflow/rocm/158224 2025-08-14T21:22:30.6114777Z * [new tag] ciflow/rocm/159158 -> ciflow/rocm/159158 2025-08-14T21:22:30.6115352Z * [new tag] ciflow/rocm/160215 -> ciflow/rocm/160215 2025-08-14T21:22:30.6115957Z * 
[new tag] ciflow/rocm/160468 -> ciflow/rocm/160468 2025-08-14T21:22:30.6116800Z * [new tag] ciflow/rocm/160538 -> ciflow/rocm/160538 2025-08-14T21:22:30.6117528Z * [new tag] ciflow/s390/143959 -> ciflow/s390/143959 2025-08-14T21:22:30.6118492Z * [new tag] ciflow/slow/01c7106 -> ciflow/slow/01c7106 2025-08-14T21:22:30.6125599Z * [new tag] ciflow/slow/0577043 -> ciflow/slow/0577043 2025-08-14T21:22:30.6126352Z * [new tag] ciflow/slow/0d5b74da0cab798fbfdb9caa53fad816999c8386-sdym -> ciflow/slow/0d5b74da0cab798fbfdb9caa53fad816999c8386-sdym 2025-08-14T21:22:30.6126771Z * [new tag] ciflow/slow/0e81104 -> ciflow/slow/0e81104 2025-08-14T21:22:30.6127028Z * [new tag] ciflow/slow/154595 -> ciflow/slow/154595 2025-08-14T21:22:30.6127231Z * [new tag] ciflow/slow/1732077 -> ciflow/slow/1732077 2025-08-14T21:22:30.6127490Z * [new tag] ciflow/slow/187eb7c -> ciflow/slow/187eb7c 2025-08-14T21:22:30.6128191Z * [new tag] ciflow/slow/1faef89 -> ciflow/slow/1faef89 2025-08-14T21:22:30.6129121Z * [new tag] ciflow/slow/3920ec1 -> ciflow/slow/3920ec1 2025-08-14T21:22:30.6129965Z * [new tag] ciflow/slow/3b7c6b2 -> ciflow/slow/3b7c6b2 2025-08-14T21:22:30.6130736Z * [new tag] ciflow/slow/59a3759 -> ciflow/slow/59a3759 2025-08-14T21:22:30.6131483Z * [new tag] ciflow/slow/70ef0bb -> ciflow/slow/70ef0bb 2025-08-14T21:22:30.6132274Z * [new tag] ciflow/slow/788ff06 -> ciflow/slow/788ff06 2025-08-14T21:22:30.6133567Z * [new tag] ciflow/slow/8751002215790a3a88750faa8f4366933e296693-sdym -> ciflow/slow/8751002215790a3a88750faa8f4366933e296693-sdym 2025-08-14T21:22:30.6134001Z * [new tag] ciflow/slow/9d85864 -> ciflow/slow/9d85864 2025-08-14T21:22:30.6140378Z * [new tag] ciflow/slow/9ffad5b -> ciflow/slow/9ffad5b 2025-08-14T21:22:30.6140636Z * [new tag] ciflow/slow/a206e8b -> ciflow/slow/a206e8b 2025-08-14T21:22:30.6140884Z * [new tag] ciflow/slow/a837609 -> ciflow/slow/a837609 2025-08-14T21:22:30.6141122Z * [new tag] ciflow/slow/af841f3 -> ciflow/slow/af841f3 2025-08-14T21:22:30.6141830Z * [new tag] ciflow/slow/da3aba1e46157c4df504b067477cdf2b3c96b194-sdym -> ciflow/slow/da3aba1e46157c4df504b067477cdf2b3c96b194-sdym 2025-08-14T21:22:30.6142087Z * [new tag] ciflow/trunk/131296 -> ciflow/trunk/131296 2025-08-14T21:22:30.6142411Z * [new tag] ciflow/trunk/137400 -> ciflow/trunk/137400 2025-08-14T21:22:30.6142660Z * [new tag] ciflow/trunk/138996 -> ciflow/trunk/138996 2025-08-14T21:22:30.6142898Z * [new tag] ciflow/trunk/139971 -> ciflow/trunk/139971 2025-08-14T21:22:30.6143153Z * [new tag] ciflow/trunk/147360 -> ciflow/trunk/147360 2025-08-14T21:22:30.6143393Z * [new tag] ciflow/trunk/147855 -> ciflow/trunk/147855 2025-08-14T21:22:30.6143649Z * [new tag] ciflow/trunk/148180 -> ciflow/trunk/148180 2025-08-14T21:22:30.6143899Z * [new tag] ciflow/trunk/148328 -> ciflow/trunk/148328 2025-08-14T21:22:30.6144142Z * [new tag] ciflow/trunk/148492 -> ciflow/trunk/148492 2025-08-14T21:22:30.6144408Z * [new tag] ciflow/trunk/150282 -> ciflow/trunk/150282 2025-08-14T21:22:30.6144882Z * [new tag] ciflow/trunk/150302 -> ciflow/trunk/150302 2025-08-14T21:22:30.6145684Z * [new tag] ciflow/trunk/151845 -> ciflow/trunk/151845 2025-08-14T21:22:30.6146467Z * [new tag] ciflow/trunk/152624 -> ciflow/trunk/152624 2025-08-14T21:22:30.6147197Z * [new tag] ciflow/trunk/154193 -> ciflow/trunk/154193 2025-08-14T21:22:30.6147854Z * [new tag] ciflow/trunk/154595 -> ciflow/trunk/154595 2025-08-14T21:22:30.6148393Z * [new tag] ciflow/trunk/154650 -> ciflow/trunk/154650 2025-08-14T21:22:30.6153823Z * [new tag] ciflow/trunk/154694 -> ciflow/trunk/154694 
2025-08-14T21:22:30.6154451Z * [new tag] ciflow/trunk/155958 -> ciflow/trunk/155958 2025-08-14T21:22:30.6155074Z * [new tag] ciflow/trunk/156049 -> ciflow/trunk/156049 2025-08-14T21:22:30.6155877Z * [new tag] ciflow/trunk/156703 -> ciflow/trunk/156703 2025-08-14T21:22:30.6156731Z * [new tag] ciflow/trunk/156851 -> ciflow/trunk/156851 2025-08-14T21:22:30.6157462Z * [new tag] ciflow/trunk/157148 -> ciflow/trunk/157148 2025-08-14T21:22:30.6158090Z * [new tag] ciflow/trunk/157152 -> ciflow/trunk/157152 2025-08-14T21:22:30.6158729Z * [new tag] ciflow/trunk/157432 -> ciflow/trunk/157432 2025-08-14T21:22:30.6159342Z * [new tag] ciflow/trunk/157685 -> ciflow/trunk/157685 2025-08-14T21:22:30.6159933Z * [new tag] ciflow/trunk/157689 -> ciflow/trunk/157689 2025-08-14T21:22:30.6160542Z * [new tag] ciflow/trunk/157699 -> ciflow/trunk/157699 2025-08-14T21:22:30.6161231Z * [new tag] ciflow/trunk/157813 -> ciflow/trunk/157813 2025-08-14T21:22:30.6161959Z * [new tag] ciflow/trunk/157994 -> ciflow/trunk/157994 2025-08-14T21:22:30.6162568Z * [new tag] ciflow/trunk/158091 -> ciflow/trunk/158091 2025-08-14T21:22:30.6171297Z * [new tag] ciflow/trunk/158104 -> ciflow/trunk/158104 2025-08-14T21:22:30.6171567Z * [new tag] ciflow/trunk/158219 -> ciflow/trunk/158219 2025-08-14T21:22:30.6171855Z * [new tag] ciflow/trunk/158220 -> ciflow/trunk/158220 2025-08-14T21:22:30.6172129Z * [new tag] ciflow/trunk/158224 -> ciflow/trunk/158224 2025-08-14T21:22:30.6172394Z * [new tag] ciflow/trunk/158529 -> ciflow/trunk/158529 2025-08-14T21:22:30.6172675Z * [new tag] ciflow/trunk/158647 -> ciflow/trunk/158647 2025-08-14T21:22:30.6172943Z * [new tag] ciflow/trunk/158810 -> ciflow/trunk/158810 2025-08-14T21:22:30.6173205Z * [new tag] ciflow/trunk/158812 -> ciflow/trunk/158812 2025-08-14T21:22:30.6173474Z * [new tag] ciflow/trunk/158863 -> ciflow/trunk/158863 2025-08-14T21:22:30.6173850Z * [new tag] ciflow/trunk/158864 -> ciflow/trunk/158864 2025-08-14T21:22:30.6174111Z * [new tag] ciflow/trunk/158883 -> ciflow/trunk/158883 2025-08-14T21:22:30.6174355Z * [new tag] ciflow/trunk/158914 -> ciflow/trunk/158914 2025-08-14T21:22:30.6174603Z * [new tag] ciflow/trunk/158965 -> ciflow/trunk/158965 2025-08-14T21:22:30.6174859Z * [new tag] ciflow/trunk/158987 -> ciflow/trunk/158987 2025-08-14T21:22:30.6175107Z * [new tag] ciflow/trunk/159033 -> ciflow/trunk/159033 2025-08-14T21:22:30.6175371Z * [new tag] ciflow/trunk/159140 -> ciflow/trunk/159140 2025-08-14T21:22:30.6175618Z * [new tag] ciflow/trunk/159158 -> ciflow/trunk/159158 2025-08-14T21:22:30.6175836Z * [new tag] ciflow/trunk/159553 -> ciflow/trunk/159553 2025-08-14T21:22:30.6176103Z * [new tag] ciflow/trunk/159562 -> ciflow/trunk/159562 2025-08-14T21:22:30.6176357Z * [new tag] ciflow/trunk/159682 -> ciflow/trunk/159682 2025-08-14T21:22:30.6176611Z * [new tag] ciflow/trunk/159691 -> ciflow/trunk/159691 2025-08-14T21:22:30.6177193Z * [new tag] ciflow/trunk/159842 -> ciflow/trunk/159842 2025-08-14T21:22:30.6186311Z * [new tag] ciflow/trunk/159889 -> ciflow/trunk/159889 2025-08-14T21:22:30.6186991Z * [new tag] ciflow/trunk/159923 -> ciflow/trunk/159923 2025-08-14T21:22:30.6187575Z * [new tag] ciflow/trunk/160004 -> ciflow/trunk/160004 2025-08-14T21:22:30.6188195Z * [new tag] ciflow/trunk/160113 -> ciflow/trunk/160113 2025-08-14T21:22:30.6188816Z * [new tag] ciflow/trunk/160161 -> ciflow/trunk/160161 2025-08-14T21:22:30.6189539Z * [new tag] ciflow/trunk/160168 -> ciflow/trunk/160168 2025-08-14T21:22:30.6190099Z * [new tag] ciflow/trunk/160181 -> ciflow/trunk/160181 
2025-08-14T21:22:30.6190738Z * [new tag] ciflow/trunk/160183 -> ciflow/trunk/160183 2025-08-14T21:22:30.6191350Z * [new tag] ciflow/trunk/160190 -> ciflow/trunk/160190 2025-08-14T21:22:30.6196262Z * [new tag] ciflow/trunk/160198 -> ciflow/trunk/160198 2025-08-14T21:22:30.6196515Z * [new tag] ciflow/trunk/160205 -> ciflow/trunk/160205 2025-08-14T21:22:30.6196777Z * [new tag] ciflow/trunk/160219 -> ciflow/trunk/160219 2025-08-14T21:22:30.6197020Z * [new tag] ciflow/trunk/160224 -> ciflow/trunk/160224 2025-08-14T21:22:30.6197273Z * [new tag] ciflow/trunk/160250 -> ciflow/trunk/160250 2025-08-14T21:22:30.6197757Z * [new tag] ciflow/trunk/160253 -> ciflow/trunk/160253 2025-08-14T21:22:30.6198527Z * [new tag] ciflow/trunk/160335 -> ciflow/trunk/160335 2025-08-14T21:22:30.6199269Z * [new tag] ciflow/trunk/160338 -> ciflow/trunk/160338 2025-08-14T21:22:30.6199878Z * [new tag] ciflow/trunk/160383 -> ciflow/trunk/160383 2025-08-14T21:22:30.6200537Z * [new tag] ciflow/trunk/160401 -> ciflow/trunk/160401 2025-08-14T21:22:30.6201228Z * [new tag] ciflow/trunk/160403 -> ciflow/trunk/160403 2025-08-14T21:22:30.6201909Z * [new tag] ciflow/trunk/160430 -> ciflow/trunk/160430 2025-08-14T21:22:30.6202535Z * [new tag] ciflow/trunk/160431 -> ciflow/trunk/160431 2025-08-14T21:22:30.6203314Z * [new tag] ciflow/trunk/160439 -> ciflow/trunk/160439 2025-08-14T21:22:30.6203969Z * [new tag] ciflow/trunk/160449 -> ciflow/trunk/160449 2025-08-14T21:22:30.6204814Z * [new tag] ciflow/trunk/160454 -> ciflow/trunk/160454 2025-08-14T21:22:30.6205426Z * [new tag] ciflow/trunk/160468 -> ciflow/trunk/160468 2025-08-14T21:22:30.6206084Z * [new tag] ciflow/trunk/160481 -> ciflow/trunk/160481 2025-08-14T21:22:30.6215200Z * [new tag] ciflow/trunk/160485 -> ciflow/trunk/160485 2025-08-14T21:22:30.6215492Z * [new tag] ciflow/trunk/160519 -> ciflow/trunk/160519 2025-08-14T21:22:30.6215761Z * [new tag] ciflow/trunk/160527 -> ciflow/trunk/160527 2025-08-14T21:22:30.6216010Z * [new tag] ciflow/trunk/160560 -> ciflow/trunk/160560 2025-08-14T21:22:30.6216184Z * [new tag] ciflow/trunk/160578 -> ciflow/trunk/160578 2025-08-14T21:22:30.6216352Z * [new tag] ciflow/trunk/160589 -> ciflow/trunk/160589 2025-08-14T21:22:30.6216529Z * [new tag] ciflow/trunk/160592 -> ciflow/trunk/160592 2025-08-14T21:22:30.6216697Z * [new tag] ciflow/trunk/160649 -> ciflow/trunk/160649 2025-08-14T21:22:30.6216858Z * [new tag] ciflow/trunk/160656 -> ciflow/trunk/160656 2025-08-14T21:22:30.6217036Z * [new tag] ciflow/unstable/123 -> ciflow/unstable/123 2025-08-14T21:22:30.6217198Z * [new tag] ciflow/vllm/160116 -> ciflow/vllm/160116 2025-08-14T21:22:30.6217369Z * [new tag] ciflow/vllm/160583 -> ciflow/vllm/160583 2025-08-14T21:22:30.6217538Z * [new tag] ciflow/vllm/160619 -> ciflow/vllm/160619 2025-08-14T21:22:30.6217699Z * [new tag] ciflow/vllm/160625 -> ciflow/vllm/160625 2025-08-14T21:22:30.6217872Z * [new tag] ciflow/vllm/160627 -> ciflow/vllm/160627 2025-08-14T21:22:30.6218086Z * [new tag] ciflow/win-arm64/156049 -> ciflow/win-arm64/156049 2025-08-14T21:22:30.6218250Z * [new tag] ciflow/win-arm64/158104 -> ciflow/win-arm64/158104 2025-08-14T21:22:30.6218703Z * [new tag] ciflow/win-arm64/159553 -> ciflow/win-arm64/159553 2025-08-14T21:22:30.6219272Z * [new tag] ciflow/win-arm64/159562 -> ciflow/win-arm64/159562 2025-08-14T21:22:30.6220106Z * [new tag] ciflow/win-arm64/159777 -> ciflow/win-arm64/159777 2025-08-14T21:22:30.6225197Z * [new tag] ciflow/win-arm64/159780 -> ciflow/win-arm64/159780 2025-08-14T21:22:30.6225846Z * [new tag] ciflow/win-arm64/159842 -> 
ciflow/win-arm64/159842 2025-08-14T21:22:30.6226417Z * [new tag] ciflow/win-arm64/160250 -> ciflow/win-arm64/160250 2025-08-14T21:22:30.6227031Z * [new tag] ciflow/win-arm64/160253 -> ciflow/win-arm64/160253 2025-08-14T21:22:30.6227666Z * [new tag] ciflow/win-arm64/160454 -> ciflow/win-arm64/160454 2025-08-14T21:22:30.6228260Z * [new tag] ciflow/win-arm64/160560 -> ciflow/win-arm64/160560 2025-08-14T21:22:30.6229869Z * [new tag] ciflow/xpu/138996 -> ciflow/xpu/138996 2025-08-14T21:22:30.6230029Z * [new tag] ciflow/xpu/139971 -> ciflow/xpu/139971 2025-08-14T21:22:30.6230528Z * [new tag] ciflow/xpu/140972 -> ciflow/xpu/140972 2025-08-14T21:22:30.6231207Z * [new tag] ciflow/xpu/143553 -> ciflow/xpu/143553 2025-08-14T21:22:30.6231910Z * [new tag] ciflow/xpu/156272 -> ciflow/xpu/156272 2025-08-14T21:22:30.6232479Z * [new tag] ciflow/xpu/156812 -> ciflow/xpu/156812 2025-08-14T21:22:30.6233079Z * [new tag] ciflow/xpu/157699 -> ciflow/xpu/157699 2025-08-14T21:22:30.6233687Z * [new tag] ciflow/xpu/157994 -> ciflow/xpu/157994 2025-08-14T21:22:30.6234248Z * [new tag] ciflow/xpu/158336 -> ciflow/xpu/158336 2025-08-14T21:22:30.6234848Z * [new tag] ciflow/xpu/158733 -> ciflow/xpu/158733 2025-08-14T21:22:30.6239592Z * [new tag] ciflow/xpu/159033 -> ciflow/xpu/159033 2025-08-14T21:22:30.6239736Z * [new tag] ciflow/xpu/159118 -> ciflow/xpu/159118 2025-08-14T21:22:30.6239899Z * [new tag] ciflow/xpu/159140 -> ciflow/xpu/159140 2025-08-14T21:22:30.6240109Z * [new tag] ciflow/xpu/159241 -> ciflow/xpu/159241 2025-08-14T21:22:30.6240247Z * [new tag] ciflow/xpu/159473 -> ciflow/xpu/159473 2025-08-14T21:22:30.6240378Z * [new tag] ciflow/xpu/159474 -> ciflow/xpu/159474 2025-08-14T21:22:30.6240518Z * [new tag] ciflow/xpu/159553 -> ciflow/xpu/159553 2025-08-14T21:22:30.6241257Z * [new tag] ciflow/xpu/159944 -> ciflow/xpu/159944 2025-08-14T21:22:30.6243864Z * [new tag] ciflow/xpu/160062 -> ciflow/xpu/160062 2025-08-14T21:22:30.6244042Z * [new tag] ciflow/xpu/160067 -> ciflow/xpu/160067 2025-08-14T21:22:30.6244203Z * [new tag] ciflow/xpu/160158 -> ciflow/xpu/160158 2025-08-14T21:22:30.6244363Z * [new tag] ciflow/xpu/160173 -> ciflow/xpu/160173 2025-08-14T21:22:30.6244783Z * [new tag] ciflow/xpu/160183 -> ciflow/xpu/160183 2025-08-14T21:22:30.6245408Z * [new tag] ciflow/xpu/160301 -> ciflow/xpu/160301 2025-08-14T21:22:30.6246058Z * [new tag] ciflow/xpu/160403 -> ciflow/xpu/160403 2025-08-14T21:22:30.6246782Z * [new tag] ciflow/xpu/160606 -> ciflow/xpu/160606 2025-08-14T21:22:30.6247487Z * [new tag] cslpull75 -> cslpull75 2025-08-14T21:22:30.6248316Z * [new tag] cslpull76 -> cslpull76 2025-08-14T21:22:30.6249224Z * [new tag] cslpull77 -> cslpull77 2025-08-14T21:22:30.6258367Z * [new tag] cslpull78 -> cslpull78 2025-08-14T21:22:30.6258516Z * [new tag] cslpull79 -> cslpull79 2025-08-14T21:22:30.6258655Z * [new tag] cslpull80 -> cslpull80 2025-08-14T21:22:30.6258801Z * [new tag] cslpull81 -> cslpull81 2025-08-14T21:22:30.6258915Z * [new tag] cslpull82 -> cslpull82 2025-08-14T21:22:30.6259034Z * [new tag] cslpull83 -> cslpull83 2025-08-14T21:22:30.6259358Z * [new tag] cslpull84 -> cslpull84 2025-08-14T21:22:30.6260144Z * [new tag] cslpull85 -> cslpull85 2025-08-14T21:22:30.6260848Z * [new tag] cslpull86 -> cslpull86 2025-08-14T21:22:30.6261661Z * [new tag] cslpull87 -> cslpull87 2025-08-14T21:22:30.6262488Z * [new tag] cslpull88 -> cslpull88 2025-08-14T21:22:30.6263137Z * [new tag] cslpull89 -> cslpull89 2025-08-14T21:22:30.6263707Z * [new tag] cslpull90 -> cslpull90 2025-08-14T21:22:30.6271018Z * [new tag] cslpull91 -> 
cslpull91 2025-08-14T21:22:30.6271155Z * [new tag] cslpull92 -> cslpull92 2025-08-14T21:22:30.6271278Z * [new tag] flight_5 -> flight_5 2025-08-14T21:22:30.6271410Z * [new tag] flight_5.1 -> flight_5.1 2025-08-14T21:22:30.6271599Z * [new tag] flight_5.2 -> flight_5.2 2025-08-14T21:22:30.6271756Z * [new tag] flight_5.3 -> flight_5.3 2025-08-14T21:22:30.6271889Z * [new tag] forpull1 -> forpull1 2025-08-14T21:22:30.6272159Z * [new tag] malfet/tag-2ef5611 -> malfet/tag-2ef5611 2025-08-14T21:22:30.6272379Z * [new tag] malfet/tag-317b1a0 -> malfet/tag-317b1a0 2025-08-14T21:22:30.6272575Z * [new tag] malfet/tag-ec6f767 -> malfet/tag-ec6f767 2025-08-14T21:22:30.6272721Z * [new tag] nightly-binary -> nightly-binary 2025-08-14T21:22:30.6273324Z * [new tag] sqzhang_flight4_plus -> sqzhang_flight4_plus 2025-08-14T21:22:30.6273895Z * [new tag] sqzhang_flight_3 -> sqzhang_flight_3 2025-08-14T21:22:30.6275008Z * [new tag] trunk/01584d2a7d029c9749eb73678cf1dc313cc35df6 -> trunk/01584d2a7d029c9749eb73678cf1dc313cc35df6 2025-08-14T21:22:30.6275744Z * [new tag] trunk/017259f9c65b6fad55fb9597d7077e2543eaae46 -> trunk/017259f9c65b6fad55fb9597d7077e2543eaae46 2025-08-14T21:22:30.6276687Z * [new tag] trunk/01bcf9a40dea937637d2cdd530bed2652510943d -> trunk/01bcf9a40dea937637d2cdd530bed2652510943d 2025-08-14T21:22:30.6277497Z * [new tag] trunk/01f66d08d93365015f4af005a252f439c4d4013a -> trunk/01f66d08d93365015f4af005a252f439c4d4013a 2025-08-14T21:22:30.6278206Z * [new tag] trunk/03b254e49f2d4c092e6ca712e5702cf2895aa47e -> trunk/03b254e49f2d4c092e6ca712e5702cf2895aa47e 2025-08-14T21:22:30.6278979Z * [new tag] trunk/05029ad1c30865d3f7e7fd13384db9d826e563eb -> trunk/05029ad1c30865d3f7e7fd13384db9d826e563eb 2025-08-14T21:22:30.6284047Z * [new tag] trunk/05c19d1acecc01b0d2512364183058a6885b9869 -> trunk/05c19d1acecc01b0d2512364183058a6885b9869 2025-08-14T21:22:30.6284808Z * [new tag] trunk/05c417715f791875fbf28cfc3fc86142de1a3206 -> trunk/05c417715f791875fbf28cfc3fc86142de1a3206 2025-08-14T21:22:30.6285646Z * [new tag] trunk/06824f3c7268bb807a422b663047cd0900ddd126 -> trunk/06824f3c7268bb807a422b663047cd0900ddd126 2025-08-14T21:22:30.6286265Z * [new tag] trunk/077cb389746a7d61cfc018aad2ba29a8aa195610 -> trunk/077cb389746a7d61cfc018aad2ba29a8aa195610 2025-08-14T21:22:30.6287054Z * [new tag] trunk/089c4a1ba007ed4abb3e5e0eafd97b7584566057 -> trunk/089c4a1ba007ed4abb3e5e0eafd97b7584566057 2025-08-14T21:22:30.6288014Z * [new tag] trunk/09381f5dacda7bbbfa361f5df76bde5cd309adc1 -> trunk/09381f5dacda7bbbfa361f5df76bde5cd309adc1 2025-08-14T21:22:30.6288742Z * [new tag] trunk/0bd3af4fb87445f4de3a1f9b823e399c8b3cefde -> trunk/0bd3af4fb87445f4de3a1f9b823e399c8b3cefde 2025-08-14T21:22:30.6289488Z * [new tag] trunk/0d3461bac0fb5177e35152d980b301ea3a0aa2c4 -> trunk/0d3461bac0fb5177e35152d980b301ea3a0aa2c4 2025-08-14T21:22:30.6290229Z * [new tag] trunk/0d40ff3b496e68193bc16d5391fa2e3623709f81 -> trunk/0d40ff3b496e68193bc16d5391fa2e3623709f81 2025-08-14T21:22:30.6291016Z * [new tag] trunk/0d71ca2c46753bb268bfdcf815c14415c122a289 -> trunk/0d71ca2c46753bb268bfdcf815c14415c122a289 2025-08-14T21:22:30.6291764Z * [new tag] trunk/0d88593dd826544c9e7bd4aa615ef86847a78d2b -> trunk/0d88593dd826544c9e7bd4aa615ef86847a78d2b 2025-08-14T21:22:30.6292527Z * [new tag] trunk/0e3e377bd5126cfcc69d70c4d77b352d3404cc11 -> trunk/0e3e377bd5126cfcc69d70c4d77b352d3404cc11 2025-08-14T21:22:30.6293331Z * [new tag] trunk/0f3b10b8eebe68e3c75d473d499b87dfe14a2eca -> trunk/0f3b10b8eebe68e3c75d473d499b87dfe14a2eca 2025-08-14T21:22:30.6298202Z * [new tag] 
trunk/101276f81b4d2a8c31bfd6796b986d4c1bfdf483 -> trunk/101276f81b4d2a8c31bfd6796b986d4c1bfdf483 2025-08-14T21:22:30.6298501Z * [new tag] trunk/1028c5e2d50e121865bf98307e7c035f549a24b2 -> trunk/1028c5e2d50e121865bf98307e7c035f549a24b2 2025-08-14T21:22:30.6298918Z * [new tag] trunk/10bc36fe840cb3510fab84d2ea22663b76702f1e -> trunk/10bc36fe840cb3510fab84d2ea22663b76702f1e 2025-08-14T21:22:30.6299332Z * [new tag] trunk/10e3514c962b58cbbee994257872a626ff76d51b -> trunk/10e3514c962b58cbbee994257872a626ff76d51b 2025-08-14T21:22:30.6299649Z * [new tag] trunk/1128f4c2a822cbe34a9d966306af15097179ffe1 -> trunk/1128f4c2a822cbe34a9d966306af15097179ffe1 2025-08-14T21:22:30.6299938Z * [new tag] trunk/114a6c40434bfb9cfa5abc30e9e34d81300d743e -> trunk/114a6c40434bfb9cfa5abc30e9e34d81300d743e 2025-08-14T21:22:30.6300376Z * [new tag] trunk/118bc97b14c24ac88a4b0c0750a9e7bf93154c76 -> trunk/118bc97b14c24ac88a4b0c0750a9e7bf93154c76 2025-08-14T21:22:30.6300703Z * [new tag] trunk/1196bb1c2e4d5a7edc09f2260e3034132f0c6c91 -> trunk/1196bb1c2e4d5a7edc09f2260e3034132f0c6c91 2025-08-14T21:22:30.6301123Z * [new tag] trunk/11a3565f1872bbad9c253a127e8d4ce7a1b40ec8 -> trunk/11a3565f1872bbad9c253a127e8d4ce7a1b40ec8 2025-08-14T21:22:30.6301442Z * [new tag] trunk/15e49f61643e4c0eef420f0981609709ef55b848 -> trunk/15e49f61643e4c0eef420f0981609709ef55b848 2025-08-14T21:22:30.6301971Z * [new tag] trunk/16d15445f8bd8740095b23de4af89d757af793ca -> trunk/16d15445f8bd8740095b23de4af89d757af793ca 2025-08-14T21:22:30.6302706Z * [new tag] trunk/178515d0ff6833c8e9221482b2a650ab31e00019 -> trunk/178515d0ff6833c8e9221482b2a650ab31e00019 2025-08-14T21:22:30.6303472Z * [new tag] trunk/182efe31dbe43376e7eef7338356aaf94d5bcabe -> trunk/182efe31dbe43376e7eef7338356aaf94d5bcabe 2025-08-14T21:22:30.6304188Z * [new tag] trunk/194fcfcfbdad0add1a1b695321e31a576058f4cf -> trunk/194fcfcfbdad0add1a1b695321e31a576058f4cf 2025-08-14T21:22:30.6304912Z * [new tag] trunk/195b5c2e27eb8f21cbc8ad1e90f42db5a8cfccca -> trunk/195b5c2e27eb8f21cbc8ad1e90f42db5a8cfccca 2025-08-14T21:22:30.6305703Z * [new tag] trunk/198b5fd2d47fa3d5110ceba6827a3b18e0064014 -> trunk/198b5fd2d47fa3d5110ceba6827a3b18e0064014 2025-08-14T21:22:30.6306475Z * [new tag] trunk/199e9abb6a366bbd27c39d1da7c3123b4eea9b5a -> trunk/199e9abb6a366bbd27c39d1da7c3123b4eea9b5a 2025-08-14T21:22:30.6307123Z * [new tag] trunk/19b4283884b2d9b3a0eb364da10b1540d14ab7a7 -> trunk/19b4283884b2d9b3a0eb364da10b1540d14ab7a7 2025-08-14T21:22:30.6308306Z * [new tag] trunk/1c2587119152cec3905647a47c65d3d26619c5a8 -> trunk/1c2587119152cec3905647a47c65d3d26619c5a8 2025-08-14T21:22:30.6313159Z * [new tag] trunk/1c26c53851c212a7c90a325549a72f0571613a8c -> trunk/1c26c53851c212a7c90a325549a72f0571613a8c 2025-08-14T21:22:30.6313955Z * [new tag] trunk/1c2cba17eab2b09d87142883da2bdbdbcf018613 -> trunk/1c2cba17eab2b09d87142883da2bdbdbcf018613 2025-08-14T21:22:30.6314740Z * [new tag] trunk/1d80d697a269234b47ec7ede192faf3bb9b159e3 -> trunk/1d80d697a269234b47ec7ede192faf3bb9b159e3 2025-08-14T21:22:30.6315481Z * [new tag] trunk/1ea688f9a2602fbcde32c0302b822526ca4219dc -> trunk/1ea688f9a2602fbcde32c0302b822526ca4219dc 2025-08-14T21:22:30.6316349Z * [new tag] trunk/1f4057c11ac941fb324386ca594d0a6882185aad -> trunk/1f4057c11ac941fb324386ca594d0a6882185aad 2025-08-14T21:22:30.6317151Z * [new tag] trunk/1fc683cf17c8c673044538d10266c00f92987be2 -> trunk/1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:22:30.6317786Z * [new tag] trunk/1febab2a89302464f6c7d69cfbef7a24c421ea65 -> trunk/1febab2a89302464f6c7d69cfbef7a24c421ea65 
2025-08-14T21:22:30.6318493Z * [new tag] trunk/206c1eef6571f906c2792d899a09136b3fce9673 -> trunk/206c1eef6571f906c2792d899a09136b3fce9673 2025-08-14T21:22:30.6319303Z * [new tag] trunk/20bdabbb3c5d6b118a94b2e045c777662563d5bb -> trunk/20bdabbb3c5d6b118a94b2e045c777662563d5bb 2025-08-14T21:22:30.6319957Z * [new tag] trunk/21392c0e06ac2b2621950455975ca6332f0bf641 -> trunk/21392c0e06ac2b2621950455975ca6332f0bf641 2025-08-14T21:22:30.6320593Z * [new tag] trunk/2247aa6d1d43e256255f5c74a781c3190a4387b6 -> trunk/2247aa6d1d43e256255f5c74a781c3190a4387b6 2025-08-14T21:22:30.6321346Z * [new tag] trunk/2259dbed4e0d3f2a8174b5847fd0741aed42451d -> trunk/2259dbed4e0d3f2a8174b5847fd0741aed42451d 2025-08-14T21:22:30.6322113Z * [new tag] trunk/231c72240d80091f099c95e326d3600cba866eee -> trunk/231c72240d80091f099c95e326d3600cba866eee 2025-08-14T21:22:30.6331020Z * [new tag] trunk/24257f5bfaa37795f74d9f64c1b43584128d4b8c -> trunk/24257f5bfaa37795f74d9f64c1b43584128d4b8c 2025-08-14T21:22:30.6331314Z * [new tag] trunk/24f43d0da7ad9c6e95a09a2fee610387728cc1cd -> trunk/24f43d0da7ad9c6e95a09a2fee610387728cc1cd 2025-08-14T21:22:30.6331716Z * [new tag] trunk/2898d3f965e5cd9d02fc2ecdab7c580fd457fea9 -> trunk/2898d3f965e5cd9d02fc2ecdab7c580fd457fea9 2025-08-14T21:22:30.6332008Z * [new tag] trunk/28ccc9e7247798980fe00a11bcd64a8016b5f227 -> trunk/28ccc9e7247798980fe00a11bcd64a8016b5f227 2025-08-14T21:22:30.6332404Z * [new tag] trunk/29712314dd5cf500a8ea3d1c69483a3cb768ca72 -> trunk/29712314dd5cf500a8ea3d1c69483a3cb768ca72 2025-08-14T21:22:30.6332694Z * [new tag] trunk/29d20d49f0b7f4e362e1cefdcdc4b5659969312c -> trunk/29d20d49f0b7f4e362e1cefdcdc4b5659969312c 2025-08-14T21:22:30.6333072Z * [new tag] trunk/2c5e10a5fceb208b11c3d569ae02e348b5893b31 -> trunk/2c5e10a5fceb208b11c3d569ae02e348b5893b31 2025-08-14T21:22:30.6333402Z * [new tag] trunk/2d0cdee394bccadcd0abe19dd4623ed978a331ad -> trunk/2d0cdee394bccadcd0abe19dd4623ed978a331ad 2025-08-14T21:22:30.6333784Z * [new tag] trunk/2e4e5ab4be9e0aeffd9c49b5b2f9f820bd0895b1 -> trunk/2e4e5ab4be9e0aeffd9c49b5b2f9f820bd0895b1 2025-08-14T21:22:30.6334099Z * [new tag] trunk/2ea40fba841b3af8103f332ba62e54f350ba9a51 -> trunk/2ea40fba841b3af8103f332ba62e54f350ba9a51 2025-08-14T21:22:30.6334451Z * [new tag] trunk/2ee22e435131369a7e4f8cc4732579acc29a941b -> trunk/2ee22e435131369a7e4f8cc4732579acc29a941b 2025-08-14T21:22:30.6334833Z * [new tag] trunk/2f4c2226175512af787725c4d5ad7313c60d4db1 -> trunk/2f4c2226175512af787725c4d5ad7313c60d4db1 2025-08-14T21:22:30.6335119Z * [new tag] trunk/3008d985a8fc155eb89374afff50cb33a6bd10d5 -> trunk/3008d985a8fc155eb89374afff50cb33a6bd10d5 2025-08-14T21:22:30.6335428Z * [new tag] trunk/3028fa6ce9d9c96671722ab8213a1a30670d7cf2 -> trunk/3028fa6ce9d9c96671722ab8213a1a30670d7cf2 2025-08-14T21:22:30.6335777Z * [new tag] trunk/303c614f3df95ae2b659c5f6c1838b14e4776ce6 -> trunk/303c614f3df95ae2b659c5f6c1838b14e4776ce6 2025-08-14T21:22:30.6336054Z * [new tag] trunk/305fa2239365ad17ac9c534a68bba8a149c42d67 -> trunk/305fa2239365ad17ac9c534a68bba8a149c42d67 2025-08-14T21:22:30.6336466Z * [new tag] trunk/31c9ac4319c0cc2ed8c6be701c6ccf73f6cb4706 -> trunk/31c9ac4319c0cc2ed8c6be701c6ccf73f6cb4706 2025-08-14T21:22:30.6336817Z * [new tag] trunk/32099961d588fc19ead8afe805d6b5108de75669 -> trunk/32099961d588fc19ead8afe805d6b5108de75669 2025-08-14T21:22:30.6341370Z * [new tag] trunk/32e5e2f596d55bb9441d5d53f3c58bcb55828047 -> trunk/32e5e2f596d55bb9441d5d53f3c58bcb55828047 2025-08-14T21:22:30.6346594Z * [new tag] trunk/334b38ccc4427b1d14981c48a3a0b92180d58225 -> 
trunk/334b38ccc4427b1d14981c48a3a0b92180d58225 2025-08-14T21:22:30.6347352Z * [new tag] trunk/334ecbd4ffe11858cae7d23d1190ddb4777c2513 -> trunk/334ecbd4ffe11858cae7d23d1190ddb4777c2513 2025-08-14T21:22:30.6348109Z * [new tag] trunk/33d94018668951611b318b7515ae96f04e48eac0 -> trunk/33d94018668951611b318b7515ae96f04e48eac0 2025-08-14T21:22:30.6349221Z * [new tag] trunk/34358f335d95213d96b6cca6a83e7bf3af6a9fcb -> trunk/34358f335d95213d96b6cca6a83e7bf3af6a9fcb 2025-08-14T21:22:30.6350180Z * [new tag] trunk/34ec5ed275f8aa875c80daa97b3e82af0b06f673 -> trunk/34ec5ed275f8aa875c80daa97b3e82af0b06f673 2025-08-14T21:22:30.6350987Z * [new tag] trunk/355462e1278d818deb9ef4a184073d5b66074816 -> trunk/355462e1278d818deb9ef4a184073d5b66074816 2025-08-14T21:22:30.6357714Z * [new tag] trunk/3626ba711b34397d1fbf0a9b1979f85cbf68b919 -> trunk/3626ba711b34397d1fbf0a9b1979f85cbf68b919 2025-08-14T21:22:30.6358405Z * [new tag] trunk/36f46d082a4954921cb8493223f000f2aab79ed7 -> trunk/36f46d082a4954921cb8493223f000f2aab79ed7 2025-08-14T21:22:30.6359157Z * [new tag] trunk/39aa3d1471549b7829c207d634dfdc1d26e346a2 -> trunk/39aa3d1471549b7829c207d634dfdc1d26e346a2 2025-08-14T21:22:30.6360048Z * [new tag] trunk/3a562374401113187ce2566b87e3f1d87d7c53aa -> trunk/3a562374401113187ce2566b87e3f1d87d7c53aa 2025-08-14T21:22:30.6360884Z * [new tag] trunk/3ac86e728dfaa7383ff7f865e9e7d33486188dae -> trunk/3ac86e728dfaa7383ff7f865e9e7d33486188dae 2025-08-14T21:22:30.6361576Z * [new tag] trunk/3be70dc30e893b552fc0f23ca06cd8f7949b6d08 -> trunk/3be70dc30e893b552fc0f23ca06cd8f7949b6d08 2025-08-14T21:22:30.6362415Z * [new tag] trunk/3cec82a7e9aea040a34dd7a2587ae6d3bd65dba0 -> trunk/3cec82a7e9aea040a34dd7a2587ae6d3bd65dba0 2025-08-14T21:22:30.6363078Z * [new tag] trunk/3cf7b4024ef83e44e9ae223dbff7c7ab68240cb2 -> trunk/3cf7b4024ef83e44e9ae223dbff7c7ab68240cb2 2025-08-14T21:22:30.6364149Z * [new tag] trunk/3ef2e1ef769582a82c6ddf150e9d11bf4bf1c44f -> trunk/3ef2e1ef769582a82c6ddf150e9d11bf4bf1c44f 2025-08-14T21:22:30.6364853Z * [new tag] trunk/3f1636ebef9b45e8a3cb0eb20d327ee6acb74be0 -> trunk/3f1636ebef9b45e8a3cb0eb20d327ee6acb74be0 2025-08-14T21:22:30.6365582Z * [new tag] trunk/3faee0a6318afcbbbb48687009a459214910d820 -> trunk/3faee0a6318afcbbbb48687009a459214910d820 2025-08-14T21:22:30.6370372Z * [new tag] trunk/3fcd79e023da7156ac584992ebab29205d3b7881 -> trunk/3fcd79e023da7156ac584992ebab29205d3b7881 2025-08-14T21:22:30.6370788Z * [new tag] trunk/3fe19a7a0af3f4d692af30476c320be18c7e8ae6 -> trunk/3fe19a7a0af3f4d692af30476c320be18c7e8ae6 2025-08-14T21:22:30.6371199Z * [new tag] trunk/41673110cd7c5960824cc74a6fcaeda1a8bc7a23 -> trunk/41673110cd7c5960824cc74a6fcaeda1a8bc7a23 2025-08-14T21:22:30.6371536Z * [new tag] trunk/4183d4ff3dcc1d87400326a9a7998c3f9e966f60 -> trunk/4183d4ff3dcc1d87400326a9a7998c3f9e966f60 2025-08-14T21:22:30.6371834Z * [new tag] trunk/422bd6808bb98cbbac31d157d9c82ad11ba9732d -> trunk/422bd6808bb98cbbac31d157d9c82ad11ba9732d 2025-08-14T21:22:30.6372121Z * [new tag] trunk/42e51cd4b3973a053fcfa80878a3f346fd158e9f -> trunk/42e51cd4b3973a053fcfa80878a3f346fd158e9f 2025-08-14T21:22:30.6372406Z * [new tag] trunk/4416433c7c625127b7f975c92f8ec98ea4c67fd3 -> trunk/4416433c7c625127b7f975c92f8ec98ea4c67fd3 2025-08-14T21:22:30.6372690Z * [new tag] trunk/45ba7ecda876685b083cbbe932450560c566826b -> trunk/45ba7ecda876685b083cbbe932450560c566826b 2025-08-14T21:22:30.6372981Z * [new tag] trunk/47a1db823dfcdacdb99f317428fc3791a18c5812 -> trunk/47a1db823dfcdacdb99f317428fc3791a18c5812 2025-08-14T21:22:30.6373630Z * [new tag] 
trunk/4a773e1e867f28a8ff0b15203e5cd9548f74fcee -> trunk/4a773e1e867f28a8ff0b15203e5cd9548f74fcee 2025-08-14T21:22:30.6374332Z * [new tag] trunk/4a90dc0c1f68d1f98832b169f792ed1bb195a0f3 -> trunk/4a90dc0c1f68d1f98832b169f792ed1bb195a0f3 2025-08-14T21:22:30.6375085Z * [new tag] trunk/4cde0acc0e4e795e1a12cbdd9b93c8c04c1fa05d -> trunk/4cde0acc0e4e795e1a12cbdd9b93c8c04c1fa05d 2025-08-14T21:22:30.6375871Z * [new tag] trunk/4d419a74610c32b1372f8802dcc61893740a23cf -> trunk/4d419a74610c32b1372f8802dcc61893740a23cf 2025-08-14T21:22:30.6376598Z * [new tag] trunk/4d5b3f2d5af7c8e4f41da4ffca53fafe8bb86235 -> trunk/4d5b3f2d5af7c8e4f41da4ffca53fafe8bb86235 2025-08-14T21:22:30.6377451Z * [new tag] trunk/4e2ddb5db67617f9f5309c8bba0c17adc84cadbc -> trunk/4e2ddb5db67617f9f5309c8bba0c17adc84cadbc 2025-08-14T21:22:30.6378273Z * [new tag] trunk/50a8c118754a6c5a46968f5c8e215ccba6831d42 -> trunk/50a8c118754a6c5a46968f5c8e215ccba6831d42 2025-08-14T21:22:30.6379032Z * [new tag] trunk/50f23ff6f883db5021dd6bab4c146434f98dd15d -> trunk/50f23ff6f883db5021dd6bab4c146434f98dd15d 2025-08-14T21:22:30.6379768Z * [new tag] trunk/515cb70367e84fcbad23fcc5b39eb1d7706df2aa -> trunk/515cb70367e84fcbad23fcc5b39eb1d7706df2aa 2025-08-14T21:22:30.6389405Z * [new tag] trunk/53e39494958b7e2278cc8176f63636e812e8945f -> trunk/53e39494958b7e2278cc8176f63636e812e8945f 2025-08-14T21:22:30.6389698Z * [new tag] trunk/556e2a73f4f0643f7c2aeb5c7dddda43388a40ce -> trunk/556e2a73f4f0643f7c2aeb5c7dddda43388a40ce 2025-08-14T21:22:30.6390101Z * [new tag] trunk/5665dc9ab76b84d7c90d845ffb0f6349b3621919 -> trunk/5665dc9ab76b84d7c90d845ffb0f6349b3621919 2025-08-14T21:22:30.6390388Z * [new tag] trunk/566c6d52ef1411c8262d7b9cf85e2044fdfbe1a3 -> trunk/566c6d52ef1411c8262d7b9cf85e2044fdfbe1a3 2025-08-14T21:22:30.6390777Z * [new tag] trunk/56c828bef93eada0e18d2cc013207831ca80cc99 -> trunk/56c828bef93eada0e18d2cc013207831ca80cc99 2025-08-14T21:22:30.6391050Z * [new tag] trunk/5737372862253a0ac0292407a5844796f02380ad -> trunk/5737372862253a0ac0292407a5844796f02380ad 2025-08-14T21:22:30.6391335Z * [new tag] trunk/57f738b6357cc8fcdde479a0948e723809a1a44d -> trunk/57f738b6357cc8fcdde479a0948e723809a1a44d 2025-08-14T21:22:30.6391713Z * [new tag] trunk/5a40c5784482255b9baf14086cc4b9349fc6d512 -> trunk/5a40c5784482255b9baf14086cc4b9349fc6d512 2025-08-14T21:22:30.6392005Z * [new tag] trunk/5a9c4cfce42b9eb87da0de40c5633f083115c307 -> trunk/5a9c4cfce42b9eb87da0de40c5633f083115c307 2025-08-14T21:22:30.6392472Z * [new tag] trunk/5ace061254af71aa83d1baae81aa1864c9746add -> trunk/5ace061254af71aa83d1baae81aa1864c9746add 2025-08-14T21:22:30.6392781Z * [new tag] trunk/5dddcd5b07c6644efca8d613f4eca1dc95daa87f -> trunk/5dddcd5b07c6644efca8d613f4eca1dc95daa87f 2025-08-14T21:22:30.6393367Z * [new tag] trunk/5ed4f9177907fe403ec4c4499d0d0e9be6b68fcf -> trunk/5ed4f9177907fe403ec4c4499d0d0e9be6b68fcf 2025-08-14T21:22:30.6394206Z * [new tag] trunk/5f1010fbb3850d99c8fdf9a9de2f79260cdc586a -> trunk/5f1010fbb3850d99c8fdf9a9de2f79260cdc586a 2025-08-14T21:22:30.6394872Z * [new tag] trunk/5f5f508aa836a46dfe88857fb223049616b94e93 -> trunk/5f5f508aa836a46dfe88857fb223049616b94e93 2025-08-14T21:22:30.6399645Z * [new tag] trunk/62bac0798100e0e06a86b7a4cee1788413e3d0ca -> trunk/62bac0798100e0e06a86b7a4cee1788413e3d0ca 2025-08-14T21:22:30.6399995Z * [new tag] trunk/63654ba4c5178fd12220cfc9d1c878af2fdd07cc -> trunk/63654ba4c5178fd12220cfc9d1c878af2fdd07cc 2025-08-14T21:22:30.6400282Z * [new tag] trunk/639778b3ee3b80e0894367fdc4442b58ae1b3a62 -> trunk/639778b3ee3b80e0894367fdc4442b58ae1b3a62 
2025-08-14T21:22:30.6400550Z * [new tag] trunk/641ee7478150f26969968f49d8b358e199679a8a -> trunk/641ee7478150f26969968f49d8b358e199679a8a 2025-08-14T21:22:30.6400835Z * [new tag] trunk/65053c03a3d209060cb239d20a229dac37cf9dd1 -> trunk/65053c03a3d209060cb239d20a229dac37cf9dd1 2025-08-14T21:22:30.6401181Z * [new tag] trunk/652a6f5954d039d61dc6e6575ccf89d385d74537 -> trunk/652a6f5954d039d61dc6e6575ccf89d385d74537 2025-08-14T21:22:30.6401474Z * [new tag] trunk/685f15dbea66e8ffa8564752f81ad2f6cb447a14 -> trunk/685f15dbea66e8ffa8564752f81ad2f6cb447a14 2025-08-14T21:22:30.6401791Z * [new tag] trunk/68a4b4b2e336cfd4451ce6546d900568e5ddf96c -> trunk/68a4b4b2e336cfd4451ce6546d900568e5ddf96c 2025-08-14T21:22:30.6402379Z * [new tag] trunk/69a0a9aa7f5e320a02e97fa789d2f72baff1554f -> trunk/69a0a9aa7f5e320a02e97fa789d2f72baff1554f 2025-08-14T21:22:30.6403179Z * [new tag] trunk/6be6d06295c870c77a6eb69f96b3170d983520d5 -> trunk/6be6d06295c870c77a6eb69f96b3170d983520d5 2025-08-14T21:22:30.6404027Z * [new tag] trunk/6c05ea6475beaf3acc05e1bda0f3f8fe3bdc1d49 -> trunk/6c05ea6475beaf3acc05e1bda0f3f8fe3bdc1d49 2025-08-14T21:22:30.6404850Z * [new tag] trunk/6da11d9aafc0d84dc7f66030c181608ff2614f66 -> trunk/6da11d9aafc0d84dc7f66030c181608ff2614f66 2025-08-14T21:22:30.6405648Z * [new tag] trunk/6e8865fbc161270e2ffc52817e6c667df417a3f7 -> trunk/6e8865fbc161270e2ffc52817e6c667df417a3f7 2025-08-14T21:22:30.6406465Z * [new tag] trunk/6ea8376f84232048d6be0f7b2edf82aec1b61d58 -> trunk/6ea8376f84232048d6be0f7b2edf82aec1b61d58 2025-08-14T21:22:30.6407214Z * [new tag] trunk/6ee175195ac7853734d64704171993cc6265eb38 -> trunk/6ee175195ac7853734d64704171993cc6265eb38 2025-08-14T21:22:30.6408004Z * [new tag] trunk/6f0f4e0c3eacd479864319127915f869f64e1935 -> trunk/6f0f4e0c3eacd479864319127915f869f64e1935 2025-08-14T21:22:30.6408644Z * [new tag] trunk/70ccdec44b89e355a2cb03ba14a634284f7750f8 -> trunk/70ccdec44b89e355a2cb03ba14a634284f7750f8 2025-08-14T21:22:30.6417964Z * [new tag] trunk/72009ec6bebca7714f99c18449183787f202af4d -> trunk/72009ec6bebca7714f99c18449183787f202af4d 2025-08-14T21:22:30.6418359Z * [new tag] trunk/731ee31f7b6ba19307daab323f6196172b71aaf8 -> trunk/731ee31f7b6ba19307daab323f6196172b71aaf8 2025-08-14T21:22:30.6418749Z * [new tag] trunk/76a0609b6bddb2bc40f1eb4ade12885023653d59 -> trunk/76a0609b6bddb2bc40f1eb4ade12885023653d59 2025-08-14T21:22:30.6419138Z * [new tag] trunk/781e9a7724c47496e3d38a81e6dd6194cf098c41 -> trunk/781e9a7724c47496e3d38a81e6dd6194cf098c41 2025-08-14T21:22:30.6419578Z * [new tag] trunk/78a2fe1d42edeaa2ef7020b0fa0ac82ee4a640e4 -> trunk/78a2fe1d42edeaa2ef7020b0fa0ac82ee4a640e4 2025-08-14T21:22:30.6419876Z * [new tag] trunk/7a974a88f2c529a614baeabe4debd00fc8a3b299 -> trunk/7a974a88f2c529a614baeabe4debd00fc8a3b299 2025-08-14T21:22:30.6420159Z * [new tag] trunk/7ae0629d64b404e0ef5d9c931433ad25e65d6114 -> trunk/7ae0629d64b404e0ef5d9c931433ad25e65d6114 2025-08-14T21:22:30.6420453Z * [new tag] trunk/7d2ec704e47f4b740cdecda5534b305e8e1875ef -> trunk/7d2ec704e47f4b740cdecda5534b305e8e1875ef 2025-08-14T21:22:30.6420733Z * [new tag] trunk/7d87e358ac8440f666fabbfd99058bb5342be6ac -> trunk/7d87e358ac8440f666fabbfd99058bb5342be6ac 2025-08-14T21:22:30.6421072Z * [new tag] trunk/7e27347fd353928c99620495c8c531a5eba7d56b -> trunk/7e27347fd353928c99620495c8c531a5eba7d56b 2025-08-14T21:22:30.6422060Z * [new tag] trunk/7e91394955721c77645fcdb75a5d47a255d65020 -> trunk/7e91394955721c77645fcdb75a5d47a255d65020 2025-08-14T21:22:30.6422802Z * [new tag] trunk/7f4cb4a3e018a621add2a37a3a2f67b982d51001 -> 
trunk/7f4cb4a3e018a621add2a37a3a2f67b982d51001 2025-08-14T21:22:30.6423573Z * [new tag] trunk/7fbc22855c17741ae016992803b2e147a13aa22d -> trunk/7fbc22855c17741ae016992803b2e147a13aa22d 2025-08-14T21:22:30.6428430Z * [new tag] trunk/8047421fbb607d70ede13b9cd5a60b7b8bdfe348 -> trunk/8047421fbb607d70ede13b9cd5a60b7b8bdfe348 2025-08-14T21:22:30.6428719Z * [new tag] trunk/8088cfa592504a2897b4c78f8a46fe658ab5c2c2 -> trunk/8088cfa592504a2897b4c78f8a46fe658ab5c2c2 2025-08-14T21:22:30.6428993Z * [new tag] trunk/80cca8307943ba64168208b54028f55b2c71daff -> trunk/80cca8307943ba64168208b54028f55b2c71daff 2025-08-14T21:22:30.6429276Z * [new tag] trunk/8147370733bbdcd034cad54e9212e51885a11892 -> trunk/8147370733bbdcd034cad54e9212e51885a11892 2025-08-14T21:22:30.6429557Z * [new tag] trunk/83875cdb5594ccb3c9206b8eb5745fe1d011cf26 -> trunk/83875cdb5594ccb3c9206b8eb5745fe1d011cf26 2025-08-14T21:22:30.6429887Z * [new tag] trunk/8399cf88ce8399d2be93355f29d4cb69f51c0654 -> trunk/8399cf88ce8399d2be93355f29d4cb69f51c0654 2025-08-14T21:22:30.6430193Z * [new tag] trunk/842cc77ab9aafd518593c2fce077d6abb42a5b7f -> trunk/842cc77ab9aafd518593c2fce077d6abb42a5b7f 2025-08-14T21:22:30.6430476Z * [new tag] trunk/85db508af533649d0b3447ff3f0d5fe083150c84 -> trunk/85db508af533649d0b3447ff3f0d5fe083150c84 2025-08-14T21:22:30.6430772Z * [new tag] trunk/86eb65f7f06016bcd5d7951dc9d74bc3993a827a -> trunk/86eb65f7f06016bcd5d7951dc9d74bc3993a827a 2025-08-14T21:22:30.6431763Z * [new tag] trunk/87e6c4079d8ec7d04aff00ed82096b39836a8367 -> trunk/87e6c4079d8ec7d04aff00ed82096b39836a8367 2025-08-14T21:22:30.6432443Z * [new tag] trunk/89654db1abccf7e5f261989a150db4d1619ea2aa -> trunk/89654db1abccf7e5f261989a150db4d1619ea2aa 2025-08-14T21:22:30.6433079Z * [new tag] trunk/8a37f0c90392a2c38b7c5955471fa49edcaf5cb1 -> trunk/8a37f0c90392a2c38b7c5955471fa49edcaf5cb1 2025-08-14T21:22:30.6433824Z * [new tag] trunk/8ab5868a2199fe485c2d66533b9244ccb97e487d -> trunk/8ab5868a2199fe485c2d66533b9244ccb97e487d 2025-08-14T21:22:30.6434620Z * [new tag] trunk/8ae4d2652f64b8444b3d5314b9232bd2119bcde6 -> trunk/8ae4d2652f64b8444b3d5314b9232bd2119bcde6 2025-08-14T21:22:30.6435378Z * [new tag] trunk/8c41cb800ae0411f02ea5da34bd5ccc3790633b0 -> trunk/8c41cb800ae0411f02ea5da34bd5ccc3790633b0 2025-08-14T21:22:30.6436202Z * [new tag] trunk/8cb91e20bc205b1416648d0ffd98d1ba1f3a6fc4 -> trunk/8cb91e20bc205b1416648d0ffd98d1ba1f3a6fc4 2025-08-14T21:22:30.6436960Z * [new tag] trunk/8cfaf51d4e29c9bd9f49ecc98d955ed53df1a13d -> trunk/8cfaf51d4e29c9bd9f49ecc98d955ed53df1a13d 2025-08-14T21:22:30.6437745Z * [new tag] trunk/8d1cf529229dce7cd5ea04abb0faac83b87ca6d1 -> trunk/8d1cf529229dce7cd5ea04abb0faac83b87ca6d1 2025-08-14T21:22:30.6438388Z * [new tag] trunk/8d3d1c844303cb1d46123a1caa76d4cf83973347 -> trunk/8d3d1c844303cb1d46123a1caa76d4cf83973347 2025-08-14T21:22:30.6443681Z * [new tag] trunk/8d6d3246316e1767a57d5e855acd6208da753b75 -> trunk/8d6d3246316e1767a57d5e855acd6208da753b75 2025-08-14T21:22:30.6444468Z * [new tag] trunk/8e6a3138581152ab827a0997f34c470271399f5e -> trunk/8e6a3138581152ab827a0997f34c470271399f5e 2025-08-14T21:22:30.6445298Z * [new tag] trunk/8eee08d2279b98af2522debb6512d37e837e89e3 -> trunk/8eee08d2279b98af2522debb6512d37e837e89e3 2025-08-14T21:22:30.6446125Z * [new tag] trunk/90b78ee50f73b5c963996076a3d54b74b1b965be -> trunk/90b78ee50f73b5c963996076a3d54b74b1b965be 2025-08-14T21:22:30.6446764Z * [new tag] trunk/94b91a876327820a4bb6f5d39d156f13f2553ab6 -> trunk/94b91a876327820a4bb6f5d39d156f13f2553ab6 2025-08-14T21:22:30.6447781Z * [new tag] 
trunk/95210cc409dd578988c7116b47725c304dea54c7 -> trunk/95210cc409dd578988c7116b47725c304dea54c7 2025-08-14T21:22:30.6448438Z * [new tag] trunk/96bd33b2de79598566df395f32e27c4d33673f05 -> trunk/96bd33b2de79598566df395f32e27c4d33673f05 2025-08-14T21:22:30.6449651Z * [new tag] trunk/9708fcf92db88b80b9010c68662d634434da3106 -> trunk/9708fcf92db88b80b9010c68662d634434da3106 2025-08-14T21:22:30.6450540Z * [new tag] trunk/97c8c98f8dcb9c5c188b691d156e0043dba6c7f8 -> trunk/97c8c98f8dcb9c5c188b691d156e0043dba6c7f8 2025-08-14T21:22:30.6451401Z * [new tag] trunk/9903ca4f70bdc1653016256f5b4fd74fdfc609f8 -> trunk/9903ca4f70bdc1653016256f5b4fd74fdfc609f8 2025-08-14T21:22:30.6452175Z * [new tag] trunk/99bc2f94c1955657e950ebdad5f77e518785ccbd -> trunk/99bc2f94c1955657e950ebdad5f77e518785ccbd 2025-08-14T21:22:30.6453008Z * [new tag] trunk/9a06e6d0310da9d8a59ae05e8ec9c0201b55cacd -> trunk/9a06e6d0310da9d8a59ae05e8ec9c0201b55cacd 2025-08-14T21:22:30.6461597Z * [new tag] trunk/9a0f7a3bb01b235ea04581ee540970a098071b72 -> trunk/9a0f7a3bb01b235ea04581ee540970a098071b72 2025-08-14T21:22:30.6462322Z * [new tag] trunk/9b803cdbe298009f08340c1aaccb25aafbca95d8 -> trunk/9b803cdbe298009f08340c1aaccb25aafbca95d8 2025-08-14T21:22:30.6462952Z * [new tag] trunk/9ccd0f5e31ea54fcf42101dfbaacc103494e34df -> trunk/9ccd0f5e31ea54fcf42101dfbaacc103494e34df 2025-08-14T21:22:30.6463519Z * [new tag] trunk/9d37c960a4fc44d5ac334ca8bf775f85b95d76fc -> trunk/9d37c960a4fc44d5ac334ca8bf775f85b95d76fc 2025-08-14T21:22:30.6464064Z * [new tag] trunk/9e07673deb212c87b1c6fea23799a97474c476ed -> trunk/9e07673deb212c87b1c6fea23799a97474c476ed 2025-08-14T21:22:30.6464598Z * [new tag] trunk/9eedd2a20b64302d0d116ea2802b50948d2ebb09 -> trunk/9eedd2a20b64302d0d116ea2802b50948d2ebb09 2025-08-14T21:22:30.6465146Z * [new tag] trunk/9fa8ce26cf638504469852cbc3e7d04579fc8674 -> trunk/9fa8ce26cf638504469852cbc3e7d04579fc8674 2025-08-14T21:22:30.6465628Z * [new tag] trunk/a06ec54d40013c97fbffc174ea8f524ea5a95715 -> trunk/a06ec54d40013c97fbffc174ea8f524ea5a95715 2025-08-14T21:22:30.6466169Z * [new tag] trunk/a288b15ea9f87ddd665f249d492e0fb0861f5a69 -> trunk/a288b15ea9f87ddd665f249d492e0fb0861f5a69 2025-08-14T21:22:30.6466741Z * [new tag] trunk/a2fd106d670bb4990cebfd00f25ecbae4145e76c -> trunk/a2fd106d670bb4990cebfd00f25ecbae4145e76c 2025-08-14T21:22:30.6471403Z * [new tag] trunk/a354fa91e26b376d96385a2206c5ff5b42aa4600 -> trunk/a354fa91e26b376d96385a2206c5ff5b42aa4600 2025-08-14T21:22:30.6471782Z * [new tag] trunk/a4f69a5da08eace1c1e6469dec6a18aa842da73b -> trunk/a4f69a5da08eace1c1e6469dec6a18aa842da73b 2025-08-14T21:22:30.6472278Z * [new tag] trunk/a53d14d5f846ac44f6c205abb1c5bc4d2f3126ae -> trunk/a53d14d5f846ac44f6c205abb1c5bc4d2f3126ae 2025-08-14T21:22:30.6473478Z * [new tag] trunk/a5652407e4f3d772fc44486ac2abf756decf0861 -> trunk/a5652407e4f3d772fc44486ac2abf756decf0861 2025-08-14T21:22:30.6474437Z * [new tag] trunk/a7abf57aabec0ce686092e2d66e53ba185dbc56b -> trunk/a7abf57aabec0ce686092e2d66e53ba185dbc56b 2025-08-14T21:22:30.6475341Z * [new tag] trunk/a84b60c0c4016785fd93b7b8a0c04f2d0770d332 -> trunk/a84b60c0c4016785fd93b7b8a0c04f2d0770d332 2025-08-14T21:22:30.6476408Z * [new tag] trunk/aa75e917bdb0f95bb6dee81853c2d3c4ab3e1883 -> trunk/aa75e917bdb0f95bb6dee81853c2d3c4ab3e1883 2025-08-14T21:22:30.6477449Z * [new tag] trunk/adcca7d9a1c053495e99012de801b2ea237faad0 -> trunk/adcca7d9a1c053495e99012de801b2ea237faad0 2025-08-14T21:22:30.6478617Z * [new tag] trunk/af10f1f86cc4effc93142a447693d8be55966615 -> trunk/af10f1f86cc4effc93142a447693d8be55966615 
2025-08-14T21:22:30.6479823Z * [new tag] trunk/af3cabc55d5699f4da528e1ca39d83338f84ae8c -> trunk/af3cabc55d5699f4da528e1ca39d83338f84ae8c 2025-08-14T21:22:30.6480999Z * [new tag] trunk/b0df7715e8c590c0001d1f9cdb97057be80c9107 -> trunk/b0df7715e8c590c0001d1f9cdb97057be80c9107 2025-08-14T21:22:30.6482469Z * [new tag] trunk/b149c7204c218e7c4d6594a89dd74f72bd480ec5 -> trunk/b149c7204c218e7c4d6594a89dd74f72bd480ec5 2025-08-14T21:22:30.6483695Z * [new tag] trunk/b1a602762e6a6674b406a3137e7e7a678885a97b -> trunk/b1a602762e6a6674b406a3137e7e7a678885a97b 2025-08-14T21:22:30.6486561Z * [new tag] trunk/b1f43548cad8fc0e30bda250f6e196310fa7a4bc -> trunk/b1f43548cad8fc0e30bda250f6e196310fa7a4bc 2025-08-14T21:22:30.6487789Z * [new tag] trunk/b219ca2a00a305753c4f1ea4c9c5d23243d54753 -> trunk/b219ca2a00a305753c4f1ea4c9c5d23243d54753 2025-08-14T21:22:30.6488984Z * [new tag] trunk/b4596895b9d85a686c2cb978938b0a7797b3690a -> trunk/b4596895b9d85a686c2cb978938b0a7797b3690a 2025-08-14T21:22:30.6489778Z * [new tag] trunk/b5fd7223b1bf44720dc9183bda7dfcf7aeccff02 -> trunk/b5fd7223b1bf44720dc9183bda7dfcf7aeccff02 2025-08-14T21:22:30.6491080Z * [new tag] trunk/b602ea9cab7d43a7ee7b4051227090f23fbd3dbf -> trunk/b602ea9cab7d43a7ee7b4051227090f23fbd3dbf 2025-08-14T21:22:30.6492307Z * [new tag] trunk/b6b74aed604bd2e96389ff99aaaf39abc64fdc64 -> trunk/b6b74aed604bd2e96389ff99aaaf39abc64fdc64 2025-08-14T21:22:30.6493522Z * [new tag] trunk/b7db86600a2614adc71c92ca42d359a7ac534d78 -> trunk/b7db86600a2614adc71c92ca42d359a7ac534d78 2025-08-14T21:22:30.6494686Z * [new tag] trunk/b9003ed3d87699e81e436719625a21996a6654e5 -> trunk/b9003ed3d87699e81e436719625a21996a6654e5 2025-08-14T21:22:30.6495864Z * [new tag] trunk/b90feeac86bda00afc2789321bcd706015ff44e3 -> trunk/b90feeac86bda00afc2789321bcd706015ff44e3 2025-08-14T21:22:30.6504927Z * [new tag] trunk/b9d7de3a094598c3dc0dd52e57bce30eb684c9d8 -> trunk/b9d7de3a094598c3dc0dd52e57bce30eb684c9d8 2025-08-14T21:22:30.6506234Z * [new tag] trunk/ba47821f524eee50a214ed39fa2e7765d54aabf4 -> trunk/ba47821f524eee50a214ed39fa2e7765d54aabf4 2025-08-14T21:22:30.6507558Z * [new tag] trunk/ba4ccf5d67e3d237f435eacc2bce3c6025f08491 -> trunk/ba4ccf5d67e3d237f435eacc2bce3c6025f08491 2025-08-14T21:22:30.6508818Z * [new tag] trunk/bcf23ecc476df2bd7479f142567213e2623308ee -> trunk/bcf23ecc476df2bd7479f142567213e2623308ee 2025-08-14T21:22:30.6509958Z * [new tag] trunk/be53f609aaf6f01e2863f490975ea9eaac3ee9ff -> trunk/be53f609aaf6f01e2863f490975ea9eaac3ee9ff 2025-08-14T21:22:30.6511181Z * [new tag] trunk/beb4d7816dedc67a5de1f82e5a45b5910f407941 -> trunk/beb4d7816dedc67a5de1f82e5a45b5910f407941 2025-08-14T21:22:30.6511801Z * [new tag] trunk/bfc873d02ec413344717493e4175a902921359fd -> trunk/bfc873d02ec413344717493e4175a902921359fd 2025-08-14T21:22:30.6512482Z * [new tag] trunk/c184cb3852f0ff2d16a489d61abc3739c309e6ca -> trunk/c184cb3852f0ff2d16a489d61abc3739c309e6ca 2025-08-14T21:22:30.6513118Z * [new tag] trunk/c24ca7f4bf79f62fd623d76346ca27e53f731431 -> trunk/c24ca7f4bf79f62fd623d76346ca27e53f731431 2025-08-14T21:22:30.6513738Z * [new tag] trunk/c3dc8dc4122977893004c49d10e4676cd0a97da4 -> trunk/c3dc8dc4122977893004c49d10e4676cd0a97da4 2025-08-14T21:22:30.6514357Z * [new tag] trunk/c5ec5458a547f7a774468ea0eb2258d3de596492 -> trunk/c5ec5458a547f7a774468ea0eb2258d3de596492 2025-08-14T21:22:30.6514993Z * [new tag] trunk/c5efc5c8a66eca84865015058b3221013ebfe685 -> trunk/c5efc5c8a66eca84865015058b3221013ebfe685 2025-08-14T21:22:30.6516008Z * [new tag] trunk/c6563341208003f64c131854a9cf029555f786d2 -> 
trunk/c6563341208003f64c131854a9cf029555f786d2 2025-08-14T21:22:30.6516620Z * [new tag] trunk/c6d78d4dbda53837d298d23a5fbc09af90a42d9e -> trunk/c6d78d4dbda53837d298d23a5fbc09af90a42d9e 2025-08-14T21:22:30.6517245Z * [new tag] trunk/c8205cb35435f39d2c26f6c94b45e4adeb6dcb23 -> trunk/c8205cb35435f39d2c26f6c94b45e4adeb6dcb23 2025-08-14T21:22:30.6517870Z * [new tag] trunk/c859ba7114b1fcb49527e090745fa17091d1f8d5 -> trunk/c859ba7114b1fcb49527e090745fa17091d1f8d5 2025-08-14T21:22:30.6518836Z * [new tag] trunk/c86040a8e68f754b90a84099187d3624954c7f36 -> trunk/c86040a8e68f754b90a84099187d3624954c7f36 2025-08-14T21:22:30.6520031Z * [new tag] trunk/c9671dc865aa0fc1cb86df754e355b44d8e02bb4 -> trunk/c9671dc865aa0fc1cb86df754e355b44d8e02bb4 2025-08-14T21:22:30.6521224Z * [new tag] trunk/ca7315c17162ea21b1ca5ba23f4bf6168766c7b9 -> trunk/ca7315c17162ea21b1ca5ba23f4bf6168766c7b9 2025-08-14T21:22:30.6522375Z * [new tag] trunk/cae2b5e3d223829bdc553fc8601df4b1c1554cff -> trunk/cae2b5e3d223829bdc553fc8601df4b1c1554cff 2025-08-14T21:22:30.6523537Z * [new tag] trunk/cbffde774557752cf20447d42d99ec6102673c31 -> trunk/cbffde774557752cf20447d42d99ec6102673c31 2025-08-14T21:22:30.6524796Z * [new tag] trunk/cd8d8c18f5bafdc1c73d5ac0129e7b4d76ab45bc -> trunk/cd8d8c18f5bafdc1c73d5ac0129e7b4d76ab45bc 2025-08-14T21:22:30.6525691Z * [new tag] trunk/cf0a0dcb0afa5e84b95461cc542f862b51ca96bf -> trunk/cf0a0dcb0afa5e84b95461cc542f862b51ca96bf 2025-08-14T21:22:30.6526355Z * [new tag] trunk/cf4964be68fa9f4ffc334f01cce42d7424b1cc81 -> trunk/cf4964be68fa9f4ffc334f01cce42d7424b1cc81 2025-08-14T21:22:30.6526998Z * [new tag] trunk/d0e2240f680ea2a553f7ee8188f52482e130bfd0 -> trunk/d0e2240f680ea2a553f7ee8188f52482e130bfd0 2025-08-14T21:22:30.6527629Z * [new tag] trunk/d1950d4bb5cba8fb6b23e4d283eea5b9801737e2 -> trunk/d1950d4bb5cba8fb6b23e4d283eea5b9801737e2 2025-08-14T21:22:30.6528267Z * [new tag] trunk/d20c4c20e61adecf00335c4d8c22eb1ace472cd3 -> trunk/d20c4c20e61adecf00335c4d8c22eb1ace472cd3 2025-08-14T21:22:30.6528890Z * [new tag] trunk/d25c4f954d599ea512e2f70cd6df101c21479d4c -> trunk/d25c4f954d599ea512e2f70cd6df101c21479d4c 2025-08-14T21:22:30.6529556Z * [new tag] trunk/d3d359dbafa89173a371e2637f22b47398e94a24 -> trunk/d3d359dbafa89173a371e2637f22b47398e94a24 2025-08-14T21:22:30.6530181Z * [new tag] trunk/d46768db04499d07a5b0db984112a6d1b7d3b0c1 -> trunk/d46768db04499d07a5b0db984112a6d1b7d3b0c1 2025-08-14T21:22:30.6530799Z * [new tag] trunk/d4c1a08c89f37d249a0146ff511c82ecc5c53b8f -> trunk/d4c1a08c89f37d249a0146ff511c82ecc5c53b8f 2025-08-14T21:22:30.6531408Z * [new tag] trunk/d556586448f3caab85673c7da0978fe31c7748f7 -> trunk/d556586448f3caab85673c7da0978fe31c7748f7 2025-08-14T21:22:30.6532018Z * [new tag] trunk/d670304001429a1a833255a918ed788d7ec4989a -> trunk/d670304001429a1a833255a918ed788d7ec4989a 2025-08-14T21:22:30.6532618Z * [new tag] trunk/d6786741a77aba200c78002646cc069b7a1799b0 -> trunk/d6786741a77aba200c78002646cc069b7a1799b0 2025-08-14T21:22:30.6533370Z * [new tag] trunk/d68c323692dedcbb74e670801e3502944fd790ff -> trunk/d68c323692dedcbb74e670801e3502944fd790ff 2025-08-14T21:22:30.6534547Z * [new tag] trunk/d8cb3db5339b45e4b745b2b883ef3ecde9843e2c -> trunk/d8cb3db5339b45e4b745b2b883ef3ecde9843e2c 2025-08-14T21:22:30.6535753Z * [new tag] trunk/da1f608ca33f3062535d0a4866d95db19e72fcbd -> trunk/da1f608ca33f3062535d0a4866d95db19e72fcbd 2025-08-14T21:22:30.6536554Z * [new tag] trunk/db0b7f1cc9bb3fe71aaf8b964a644147ae8e1c35 -> trunk/db0b7f1cc9bb3fe71aaf8b964a644147ae8e1c35 2025-08-14T21:22:30.6537233Z * [new tag] 
trunk/db32b60662b2f2bdcad980127d5dc4b66b02a7e4 -> trunk/db32b60662b2f2bdcad980127d5dc4b66b02a7e4 2025-08-14T21:22:30.6537898Z * [new tag] trunk/db763b17175553ba09637362eb9773a91997a7ad -> trunk/db763b17175553ba09637362eb9773a91997a7ad 2025-08-14T21:22:30.6538550Z * [new tag] trunk/db78943a1ca13a32a3d6045eb15e2b719ee13a2f -> trunk/db78943a1ca13a32a3d6045eb15e2b719ee13a2f 2025-08-14T21:22:30.6552120Z * [new tag] trunk/dc0d18e023d9b7e314ebba0f234b6cb1579dbcfd -> trunk/dc0d18e023d9b7e314ebba0f234b6cb1579dbcfd 2025-08-14T21:22:30.6552923Z * [new tag] trunk/dd21c8a578038ab2841a7ba809a06921093ac9d8 -> trunk/dd21c8a578038ab2841a7ba809a06921093ac9d8 2025-08-14T21:22:30.6553581Z * [new tag] trunk/deea71a90e05eb320c04bebfead5317746637f0d -> trunk/deea71a90e05eb320c04bebfead5317746637f0d 2025-08-14T21:22:30.6554285Z * [new tag] trunk/df55ec7d4b35f6d21691e9dd41c82f27de762948 -> trunk/df55ec7d4b35f6d21691e9dd41c82f27de762948 2025-08-14T21:22:30.6555028Z * [new tag] trunk/e1cf0d496ea85d1807c8c740f296e77bf7bdc1df -> trunk/e1cf0d496ea85d1807c8c740f296e77bf7bdc1df 2025-08-14T21:22:30.6555657Z * [new tag] trunk/e248719ac03c103767ab72034f6b9fd56855bf98 -> trunk/e248719ac03c103767ab72034f6b9fd56855bf98 2025-08-14T21:22:30.6556285Z * [new tag] trunk/e49762026070f66be41bfa6537fbcf9bfc24e558 -> trunk/e49762026070f66be41bfa6537fbcf9bfc24e558 2025-08-14T21:22:30.6557061Z * [new tag] trunk/e4de93f6a3e342bab34d3757cf90ec0ccc87e168 -> trunk/e4de93f6a3e342bab34d3757cf90ec0ccc87e168 2025-08-14T21:22:30.6557705Z * [new tag] trunk/e619c6bb90b9dedaccd3cbeed86a288993a4e33f -> trunk/e619c6bb90b9dedaccd3cbeed86a288993a4e33f 2025-08-14T21:22:30.6558348Z * [new tag] trunk/e63c2b21c186a7d2ab8a8953b8aa1535f2e96e58 -> trunk/e63c2b21c186a7d2ab8a8953b8aa1535f2e96e58 2025-08-14T21:22:30.6559040Z * [new tag] trunk/e7152ff8a6a929a0db7f3f4a72a5b6d471769cd3 -> trunk/e7152ff8a6a929a0db7f3f4a72a5b6d471769cd3 2025-08-14T21:22:30.6559678Z * [new tag] trunk/e96c7c4bb0f6aeae2ab3b6f040f7d67edbec199a -> trunk/e96c7c4bb0f6aeae2ab3b6f040f7d67edbec199a 2025-08-14T21:22:30.6560308Z * [new tag] trunk/e9eb2096a59a79e7a94c3e28a0715e040369f34c -> trunk/e9eb2096a59a79e7a94c3e28a0715e040369f34c 2025-08-14T21:22:30.6560934Z * [new tag] trunk/eac2d9d695a32dd456050f45cac35134ec3809f4 -> trunk/eac2d9d695a32dd456050f45cac35134ec3809f4 2025-08-14T21:22:30.6561648Z * [new tag] trunk/ecde76c764752540edf9ef62a97936c86d984b17 -> trunk/ecde76c764752540edf9ef62a97936c86d984b17 2025-08-14T21:22:30.6562277Z * [new tag] trunk/ecea81117b2fdc52907c97b3c32d779e07b5d55b -> trunk/ecea81117b2fdc52907c97b3c32d779e07b5d55b 2025-08-14T21:22:30.6562903Z * [new tag] trunk/edaa151d0d5a4e75fbec9843f49cc78770eb61fb -> trunk/edaa151d0d5a4e75fbec9843f49cc78770eb61fb 2025-08-14T21:22:30.6563533Z * [new tag] trunk/ee1b0412b919dfb358d5a697b3be49621497fbc2 -> trunk/ee1b0412b919dfb358d5a697b3be49621497fbc2 2025-08-14T21:22:30.6564161Z * [new tag] trunk/ee1fb43450c2e985657f95a91b68328d6f20f24e -> trunk/ee1fb43450c2e985657f95a91b68328d6f20f24e 2025-08-14T21:22:30.6564863Z * [new tag] trunk/ee89cc7a0acd69de25f98fe4ef828546db7b444c -> trunk/ee89cc7a0acd69de25f98fe4ef828546db7b444c 2025-08-14T21:22:30.6565507Z * [new tag] trunk/ee9f8ba11d664b871a9e0c7933fdc8571635b78c -> trunk/ee9f8ba11d664b871a9e0c7933fdc8571635b78c 2025-08-14T21:22:30.6566137Z * [new tag] trunk/eed9dbf70f43ee529fec78ac00ed9a4fd74c6e76 -> trunk/eed9dbf70f43ee529fec78ac00ed9a4fd74c6e76 2025-08-14T21:22:30.6566776Z * [new tag] trunk/f077c2402e4eb5b0ed562b4ee5b7a0503f26ef94 -> trunk/f077c2402e4eb5b0ed562b4ee5b7a0503f26ef94 
2025-08-14T21:22:30.6567413Z * [new tag] trunk/f0980fc0bbd656d6c02d23ad97e945353b314f35 -> trunk/f0980fc0bbd656d6c02d23ad97e945353b314f35 2025-08-14T21:22:30.6568054Z * [new tag] trunk/f15ada5c6fad97a7dcbfa4673f067b6942dda640 -> trunk/f15ada5c6fad97a7dcbfa4673f067b6942dda640 2025-08-14T21:22:30.6568712Z * [new tag] trunk/f27232a2134150cb5e55d26a74d8c36c6a961ca5 -> trunk/f27232a2134150cb5e55d26a74d8c36c6a961ca5 2025-08-14T21:22:30.6573534Z * [new tag] trunk/f33ce40bc062a281e1a1f57e8c1926d0a7d155cc -> trunk/f33ce40bc062a281e1a1f57e8c1926d0a7d155cc 2025-08-14T21:22:30.6574171Z * [new tag] trunk/f341077ce4710172da20cfad916ee37159bfe9fe -> trunk/f341077ce4710172da20cfad916ee37159bfe9fe 2025-08-14T21:22:30.6574810Z * [new tag] trunk/f3a4d742ece08de4cb0e59dcc62e0093a7d0b0c7 -> trunk/f3a4d742ece08de4cb0e59dcc62e0093a7d0b0c7 2025-08-14T21:22:30.6575507Z * [new tag] trunk/f3f159ff8c4bad2edec99c68a941c628e983d04c -> trunk/f3f159ff8c4bad2edec99c68a941c628e983d04c 2025-08-14T21:22:30.6576148Z * [new tag] trunk/f60454cce8b93e5bbf67f2f3c88c8ac01ed65457 -> trunk/f60454cce8b93e5bbf67f2f3c88c8ac01ed65457 2025-08-14T21:22:30.6576787Z * [new tag] trunk/f7b2f3314cf7aede67d5fa5c75e4243208484344 -> trunk/f7b2f3314cf7aede67d5fa5c75e4243208484344 2025-08-14T21:22:30.6577412Z * [new tag] trunk/f8f0414a5983ff481a2188e0c18594150430c8c5 -> trunk/f8f0414a5983ff481a2188e0c18594150430c8c5 2025-08-14T21:22:30.6578089Z * [new tag] trunk/f95b58c2844b3444cd8446fed8570729dc4216eb -> trunk/f95b58c2844b3444cd8446fed8570729dc4216eb 2025-08-14T21:22:30.6578712Z * [new tag] trunk/f990490a23815ea6ee27e487c70ba2cf513ba43d -> trunk/f990490a23815ea6ee27e487c70ba2cf513ba43d 2025-08-14T21:22:30.6579341Z * [new tag] trunk/fb887c3bb588cfe782615e67f6c26db636b8539b -> trunk/fb887c3bb588cfe782615e67f6c26db636b8539b 2025-08-14T21:22:30.6579958Z * [new tag] trunk/fc25c68f20f772290927a7031b998b92615259cf -> trunk/fc25c68f20f772290927a7031b998b92615259cf 2025-08-14T21:22:30.6580573Z * [new tag] trunk/fc80f6859e0ccf66513a40f04b9e735e759d4ddb -> trunk/fc80f6859e0ccf66513a40f04b9e735e759d4ddb 2025-08-14T21:22:30.6581215Z * [new tag] trunk/fdfd69bb05488d76123db9cc1cdd90ac4137bbfb -> trunk/fdfd69bb05488d76123db9cc1cdd90ac4137bbfb 2025-08-14T21:22:30.6581878Z * [new tag] trunk/fe3f5fe4ea2ff6f56406dc5d954636ebb08d0a08 -> trunk/fe3f5fe4ea2ff6f56406dc5d954636ebb08d0a08 2025-08-14T21:22:30.6582521Z * [new tag] trunk/fea7e9dd37c02c334b130f6624af6163fde6b2ab -> trunk/fea7e9dd37c02c334b130f6624af6163fde6b2ab 2025-08-14T21:22:30.6583190Z * [new tag] trunk/ff0d56d03592aa03f3ced8359241d21df1783393 -> trunk/ff0d56d03592aa03f3ced8359241d21df1783393 2025-08-14T21:22:30.6583752Z * [new tag] v0.1.1 -> v0.1.1 2025-08-14T21:22:30.6584063Z * [new tag] v0.1.10 -> v0.1.10 2025-08-14T21:22:30.6584363Z * [new tag] v0.1.11 -> v0.1.11 2025-08-14T21:22:30.6584652Z * [new tag] v0.1.12 -> v0.1.12 2025-08-14T21:22:30.6584951Z * [new tag] v0.1.2 -> v0.1.2 2025-08-14T21:22:30.6585245Z * [new tag] v0.1.3 -> v0.1.3 2025-08-14T21:22:30.6585592Z * [new tag] v0.1.4 -> v0.1.4 2025-08-14T21:22:30.6585877Z * [new tag] v0.1.5 -> v0.1.5 2025-08-14T21:22:30.6586169Z * [new tag] v0.1.6 -> v0.1.6 2025-08-14T21:22:30.6586454Z * [new tag] v0.1.7 -> v0.1.7 2025-08-14T21:22:30.6586739Z * [new tag] v0.1.8 -> v0.1.8 2025-08-14T21:22:30.6587013Z * [new tag] v0.1.9 -> v0.1.9 2025-08-14T21:22:30.6589883Z * [new tag] v0.2.0 -> v0.2.0 2025-08-14T21:22:30.6590332Z * [new tag] v0.3.0 -> v0.3.0 2025-08-14T21:22:30.6590712Z * [new tag] v0.3.1 -> v0.3.1 2025-08-14T21:22:30.6591032Z * [new tag] v0.4.0 -> 
v0.4.0 2025-08-14T21:22:30.6591380Z * [new tag] v0.4.1 -> v0.4.1 2025-08-14T21:22:30.6591719Z * [new tag] v1.0.0 -> v1.0.0 2025-08-14T21:22:30.6592041Z * [new tag] v1.0.0a0 -> v1.0.0a0 2025-08-14T21:22:30.6592456Z * [new tag] v1.0.1 -> v1.0.1 2025-08-14T21:22:30.6592761Z * [new tag] v1.0rc0 -> v1.0rc0 2025-08-14T21:22:30.6593119Z * [new tag] v1.0rc1 -> v1.0rc1 2025-08-14T21:22:30.6593454Z * [new tag] v1.1.0 -> v1.1.0 2025-08-14T21:22:30.6593754Z * [new tag] v1.1.0a0 -> v1.1.0a0 2025-08-14T21:22:30.6594175Z * [new tag] v1.10.0 -> v1.10.0 2025-08-14T21:22:30.6594468Z * [new tag] v1.10.0-rc1 -> v1.10.0-rc1 2025-08-14T21:22:30.6595035Z * [new tag] v1.10.0-rc2 -> v1.10.0-rc2 2025-08-14T21:22:30.6595548Z * [new tag] v1.10.0-rc3 -> v1.10.0-rc3 2025-08-14T21:22:30.6596307Z * [new tag] v1.10.1 -> v1.10.1 2025-08-14T21:22:30.6596900Z * [new tag] v1.10.1-rc1 -> v1.10.1-rc1 2025-08-14T21:22:30.6597477Z * [new tag] v1.10.2 -> v1.10.2 2025-08-14T21:22:30.6598179Z * [new tag] v1.10.2-rc1 -> v1.10.2-rc1 2025-08-14T21:22:30.6603319Z * [new tag] v1.11.0 -> v1.11.0 2025-08-14T21:22:30.6604062Z * [new tag] v1.11.0-rc1 -> v1.11.0-rc1 2025-08-14T21:22:30.6604888Z * [new tag] v1.11.0-rc2 -> v1.11.0-rc2 2025-08-14T21:22:30.6605739Z * [new tag] v1.11.0-rc3 -> v1.11.0-rc3 2025-08-14T21:22:30.6606828Z * [new tag] v1.11.0-rc4 -> v1.11.0-rc4 2025-08-14T21:22:30.6607583Z * [new tag] v1.11.0-rc5 -> v1.11.0-rc5 2025-08-14T21:22:30.6608188Z * [new tag] v1.11.0-rc6 -> v1.11.0-rc6 2025-08-14T21:22:30.6608834Z * [new tag] v1.11.0-rc7 -> v1.11.0-rc7 2025-08-14T21:22:30.6609614Z * [new tag] v1.12.0 -> v1.12.0 2025-08-14T21:22:30.6610360Z * [new tag] v1.12.0-rc1 -> v1.12.0-rc1 2025-08-14T21:22:30.6611098Z * [new tag] v1.12.0-rc2 -> v1.12.0-rc2 2025-08-14T21:22:30.6611893Z * [new tag] v1.12.0-rc3 -> v1.12.0-rc3 2025-08-14T21:22:30.6620735Z * [new tag] v1.12.0-rc4 -> v1.12.0-rc4 2025-08-14T21:22:30.6621156Z * [new tag] v1.12.0-rc5 -> v1.12.0-rc5 2025-08-14T21:22:30.6621455Z * [new tag] v1.12.0-rc6 -> v1.12.0-rc6 2025-08-14T21:22:30.6621862Z * [new tag] v1.12.0-rc7 -> v1.12.0-rc7 2025-08-14T21:22:30.6622246Z * [new tag] v1.12.0-rc8 -> v1.12.0-rc8 2025-08-14T21:22:30.6622663Z * [new tag] v1.12.1 -> v1.12.1 2025-08-14T21:22:30.6622958Z * [new tag] v1.12.1-rc1 -> v1.12.1-rc1 2025-08-14T21:22:30.6623357Z * [new tag] v1.12.1-rc2 -> v1.12.1-rc2 2025-08-14T21:22:30.6623653Z * [new tag] v1.12.1-rc3 -> v1.12.1-rc3 2025-08-14T21:22:30.6623939Z * [new tag] v1.12.1-rc4 -> v1.12.1-rc4 2025-08-14T21:22:30.6624344Z * [new tag] v1.12.1-rc5 -> v1.12.1-rc5 2025-08-14T21:22:30.6624712Z * [new tag] v1.13.0 -> v1.13.0 2025-08-14T21:22:30.6625031Z * [new tag] v1.13.0-rc1 -> v1.13.0-rc1 2025-08-14T21:22:30.6625436Z * [new tag] v1.13.0-rc2 -> v1.13.0-rc2 2025-08-14T21:22:30.6625734Z * [new tag] v1.13.0-rc3 -> v1.13.0-rc3 2025-08-14T21:22:30.6626138Z * [new tag] v1.13.0-rc4 -> v1.13.0-rc4 2025-08-14T21:22:30.6626430Z * [new tag] v1.13.0-rc5 -> v1.13.0-rc5 2025-08-14T21:22:30.6626877Z * [new tag] v1.13.0-rc6 -> v1.13.0-rc6 2025-08-14T21:22:30.6627271Z * [new tag] v1.13.1 -> v1.13.1 2025-08-14T21:22:30.6627566Z * [new tag] v1.13.1-rc1 -> v1.13.1-rc1 2025-08-14T21:22:30.6636191Z * [new tag] v1.2.0 -> v1.2.0 2025-08-14T21:22:30.6636850Z * [new tag] v1.2.0a0 -> v1.2.0a0 2025-08-14T21:22:30.6637642Z * [new tag] v1.3.0 -> v1.3.0 2025-08-14T21:22:30.6638397Z * [new tag] v1.3.0a0 -> v1.3.0a0 2025-08-14T21:22:30.6639088Z * [new tag] v1.3.1 -> v1.3.1 2025-08-14T21:22:30.6639869Z * [new tag] v1.4.0 -> v1.4.0 2025-08-14T21:22:30.6640564Z * [new tag] v1.4.0a0 -> 
v1.4.0a0 2025-08-14T21:22:30.6641257Z * [new tag] v1.4.1 -> v1.4.1 2025-08-14T21:22:30.6646024Z * [new tag] v1.5.0 -> v1.5.0 2025-08-14T21:22:30.6646333Z * [new tag] v1.5.0-rc1 -> v1.5.0-rc1 2025-08-14T21:22:30.6646722Z * [new tag] v1.5.0-rc2 -> v1.5.0-rc2 2025-08-14T21:22:30.6647045Z * [new tag] v1.5.0-rc3 -> v1.5.0-rc3 2025-08-14T21:22:30.6647565Z * [new tag] v1.5.0-rc4 -> v1.5.0-rc4 2025-08-14T21:22:30.6648229Z * [new tag] v1.5.0-rc5 -> v1.5.0-rc5 2025-08-14T21:22:30.6649778Z * [new tag] v1.5.1 -> v1.5.1 2025-08-14T21:22:30.6650402Z * [new tag] v1.5.1-rc1 -> v1.5.1-rc1 2025-08-14T21:22:30.6650956Z * [new tag] v1.6.0 -> v1.6.0 2025-08-14T21:22:30.6651721Z * [new tag] v1.6.0-rc1 -> v1.6.0-rc1 2025-08-14T21:22:30.6652560Z * [new tag] v1.6.0-rc2 -> v1.6.0-rc2 2025-08-14T21:22:30.6653322Z * [new tag] v1.6.0-rc3 -> v1.6.0-rc3 2025-08-14T21:22:30.6654147Z * [new tag] v1.6.0-rc4 -> v1.6.0-rc4 2025-08-14T21:22:30.6654914Z * [new tag] v1.6.0-rc5 -> v1.6.0-rc5 2025-08-14T21:22:30.6655711Z * [new tag] v1.6.0-rc6 -> v1.6.0-rc6 2025-08-14T21:22:30.6662170Z * [new tag] v1.6.0-rc7 -> v1.6.0-rc7 2025-08-14T21:22:30.6662566Z * [new tag] v1.7.0 -> v1.7.0 2025-08-14T21:22:30.6663055Z * [new tag] v1.7.0-rc1 -> v1.7.0-rc1 2025-08-14T21:22:30.6663372Z * [new tag] v1.7.0-rc2 -> v1.7.0-rc2 2025-08-14T21:22:30.6663679Z * [new tag] v1.7.0-rc3 -> v1.7.0-rc3 2025-08-14T21:22:30.6663974Z * [new tag] v1.7.0-rc4 -> v1.7.0-rc4 2025-08-14T21:22:30.6664275Z * [new tag] v1.7.1 -> v1.7.1 2025-08-14T21:22:30.6664570Z * [new tag] v1.7.1-rc1 -> v1.7.1-rc1 2025-08-14T21:22:30.6664863Z * [new tag] v1.7.1-rc2 -> v1.7.1-rc2 2025-08-14T21:22:30.6665165Z * [new tag] v1.7.1-rc3 -> v1.7.1-rc3 2025-08-14T21:22:30.6665468Z * [new tag] v1.8.0 -> v1.8.0 2025-08-14T21:22:30.6665759Z * [new tag] v1.8.0-rc1 -> v1.8.0-rc1 2025-08-14T21:22:30.6666046Z * [new tag] v1.8.0-rc2 -> v1.8.0-rc2 2025-08-14T21:22:30.6666349Z * [new tag] v1.8.0-rc3 -> v1.8.0-rc3 2025-08-14T21:22:30.6666922Z * [new tag] v1.8.0-rc4 -> v1.8.0-rc4 2025-08-14T21:22:30.6667516Z * [new tag] v1.8.0-rc5 -> v1.8.0-rc5 2025-08-14T21:22:30.6668081Z * [new tag] v1.8.1 -> v1.8.1 2025-08-14T21:22:30.6668836Z * [new tag] v1.8.1-rc1 -> v1.8.1-rc1 2025-08-14T21:22:30.6669500Z * [new tag] v1.8.1-rc2 -> v1.8.1-rc2 2025-08-14T21:22:30.6670129Z * [new tag] v1.8.1-rc3 -> v1.8.1-rc3 2025-08-14T21:22:30.6678988Z * [new tag] v1.8.2 -> v1.8.2 2025-08-14T21:22:30.6679295Z * [new tag] v1.8.2-rc1 -> v1.8.2-rc1 2025-08-14T21:22:30.6679709Z * [new tag] v1.9.0 -> v1.9.0 2025-08-14T21:22:30.6680014Z * [new tag] v1.9.0-rc1 -> v1.9.0-rc1 2025-08-14T21:22:30.6680542Z * [new tag] v1.9.0-rc2 -> v1.9.0-rc2 2025-08-14T21:22:30.6680857Z * [new tag] v1.9.0-rc3 -> v1.9.0-rc3 2025-08-14T21:22:30.6681334Z * [new tag] v1.9.0-rc4 -> v1.9.0-rc4 2025-08-14T21:22:30.6681618Z * [new tag] v1.9.1 -> v1.9.1 2025-08-14T21:22:30.6682227Z * [new tag] v1.9.1-rc1 -> v1.9.1-rc1 2025-08-14T21:22:30.6682855Z * [new tag] v1.9.1-rc2 -> v1.9.1-rc2 2025-08-14T21:22:30.6683702Z * [new tag] v2.0.0 -> v2.0.0 2025-08-14T21:22:30.6684458Z * [new tag] v2.0.0-rc1 -> v2.0.0-rc1 2025-08-14T21:22:30.6689152Z * [new tag] v2.0.0-rc2 -> v2.0.0-rc2 2025-08-14T21:22:30.6689462Z * [new tag] v2.0.0-rc3 -> v2.0.0-rc3 2025-08-14T21:22:30.6689795Z * [new tag] v2.0.0-rc4 -> v2.0.0-rc4 2025-08-14T21:22:30.6690148Z * [new tag] v2.0.0-rc5 -> v2.0.0-rc5 2025-08-14T21:22:30.6690553Z * [new tag] v2.0.0-rc6 -> v2.0.0-rc6 2025-08-14T21:22:30.6690840Z * [new tag] v2.0.1 -> v2.0.1 2025-08-14T21:22:30.6691231Z * [new tag] v2.0.1-rc1 -> v2.0.1-rc1 
2025-08-14T21:22:30.6691519Z * [new tag] v2.0.1-rc2 -> v2.0.1-rc2 2025-08-14T21:22:30.6691915Z * [new tag] v2.0.1-rc3 -> v2.0.1-rc3 2025-08-14T21:22:30.6692203Z * [new tag] v2.0.1-rc4 -> v2.0.1-rc4 2025-08-14T21:22:30.6693271Z * [new tag] v2.1.0 -> v2.1.0 2025-08-14T21:22:30.6693859Z * [new tag] v2.1.0-rc1 -> v2.1.0-rc1 2025-08-14T21:22:30.6694663Z * [new tag] v2.1.0-rc2 -> v2.1.0-rc2 2025-08-14T21:22:30.6695461Z * [new tag] v2.1.0-rc3 -> v2.1.0-rc3 2025-08-14T21:22:30.6696243Z * [new tag] v2.1.0-rc4 -> v2.1.0-rc4 2025-08-14T21:22:30.6697058Z * [new tag] v2.1.0-rc5 -> v2.1.0-rc5 2025-08-14T21:22:30.6697732Z * [new tag] v2.1.0-rc6 -> v2.1.0-rc6 2025-08-14T21:22:30.6698467Z * [new tag] v2.1.1 -> v2.1.1 2025-08-14T21:22:30.6699272Z * [new tag] v2.1.1-rc1 -> v2.1.1-rc1 2025-08-14T21:22:30.6707978Z * [new tag] v2.1.1-rc2 -> v2.1.1-rc2 2025-08-14T21:22:30.6708368Z * [new tag] v2.1.1-rc3 -> v2.1.1-rc3 2025-08-14T21:22:30.6708722Z * [new tag] v2.1.1-rc4 -> v2.1.1-rc4 2025-08-14T21:22:30.6709039Z * [new tag] v2.1.1-rc5 -> v2.1.1-rc5 2025-08-14T21:22:30.6709343Z * [new tag] v2.1.1-rc6 -> v2.1.1-rc6 2025-08-14T21:22:30.6709635Z * [new tag] v2.1.2 -> v2.1.2 2025-08-14T21:22:30.6709922Z * [new tag] v2.1.2-rc1 -> v2.1.2-rc1 2025-08-14T21:22:30.6710215Z * [new tag] v2.1.2-rc2 -> v2.1.2-rc2 2025-08-14T21:22:30.6710513Z * [new tag] v2.1.2-rc3 -> v2.1.2-rc3 2025-08-14T21:22:30.6711069Z * [new tag] v2.2.0 -> v2.2.0 2025-08-14T21:22:30.6711867Z * [new tag] v2.2.0-rc1 -> v2.2.0-rc1 2025-08-14T21:22:30.6712625Z * [new tag] v2.2.0-rc2 -> v2.2.0-rc2 2025-08-14T21:22:30.6713383Z * [new tag] v2.2.0-rc3 -> v2.2.0-rc3 2025-08-14T21:22:30.6718052Z * [new tag] v2.2.0-rc4 -> v2.2.0-rc4 2025-08-14T21:22:30.6718442Z * [new tag] v2.2.0-rc5 -> v2.2.0-rc5 2025-08-14T21:22:30.6718773Z * [new tag] v2.2.0-rc6 -> v2.2.0-rc6 2025-08-14T21:22:30.6719098Z * [new tag] v2.2.0-rc7 -> v2.2.0-rc7 2025-08-14T21:22:30.6719418Z * [new tag] v2.2.0-rc8 -> v2.2.0-rc8 2025-08-14T21:22:30.6719737Z * [new tag] v2.2.1 -> v2.2.1 2025-08-14T21:22:30.6720068Z * [new tag] v2.2.1-rc1 -> v2.2.1-rc1 2025-08-14T21:22:30.6720393Z * [new tag] v2.2.1-rc2 -> v2.2.1-rc2 2025-08-14T21:22:30.6720690Z * [new tag] v2.2.1-rc3 -> v2.2.1-rc3 2025-08-14T21:22:30.6720973Z * [new tag] v2.2.2 -> v2.2.2 2025-08-14T21:22:30.6721365Z * [new tag] v2.2.2-rc1 -> v2.2.2-rc1 2025-08-14T21:22:30.6722019Z * [new tag] v2.2.2-rc2 -> v2.2.2-rc2 2025-08-14T21:22:30.6722603Z * [new tag] v2.2.2-rc3 -> v2.2.2-rc3 2025-08-14T21:22:30.6723419Z * [new tag] v2.3.0 -> v2.3.0 2025-08-14T21:22:30.6724150Z * [new tag] v2.3.0-rc1 -> v2.3.0-rc1 2025-08-14T21:22:30.6725105Z * [new tag] v2.3.0-rc10 -> v2.3.0-rc10 2025-08-14T21:22:30.6725871Z * [new tag] v2.3.0-rc11 -> v2.3.0-rc11 2025-08-14T21:22:30.6726563Z * [new tag] v2.3.0-rc12 -> v2.3.0-rc12 2025-08-14T21:22:30.6732961Z * [new tag] v2.3.0-rc2 -> v2.3.0-rc2 2025-08-14T21:22:30.6733278Z * [new tag] v2.3.0-rc3 -> v2.3.0-rc3 2025-08-14T21:22:30.6733631Z * [new tag] v2.3.0-rc4 -> v2.3.0-rc4 2025-08-14T21:22:30.6734402Z * [new tag] v2.3.0-rc5 -> v2.3.0-rc5 2025-08-14T21:22:30.6734958Z * [new tag] v2.3.0-rc6 -> v2.3.0-rc6 2025-08-14T21:22:30.6735742Z * [new tag] v2.3.0-rc7 -> v2.3.0-rc7 2025-08-14T21:22:30.6736495Z * [new tag] v2.3.0-rc8 -> v2.3.0-rc8 2025-08-14T21:22:30.6737149Z * [new tag] v2.3.0-rc9 -> v2.3.0-rc9 2025-08-14T21:22:30.6737762Z * [new tag] v2.3.1 -> v2.3.1 2025-08-14T21:22:30.6738440Z * [new tag] v2.3.1-rc1 -> v2.3.1-rc1 2025-08-14T21:22:30.6739237Z * [new tag] v2.3.1-rc2 -> v2.3.1-rc2 2025-08-14T21:22:30.6740006Z * [new tag] 
v2.3.1-rc3 -> v2.3.1-rc3 2025-08-14T21:22:30.6740810Z * [new tag] v2.4.0 -> v2.4.0 2025-08-14T21:22:30.6741611Z * [new tag] v2.4.0-rc1 -> v2.4.0-rc1 2025-08-14T21:22:30.6742351Z * [new tag] v2.4.0-rc2 -> v2.4.0-rc2 2025-08-14T21:22:30.6751662Z * [new tag] v2.4.0-rc3 -> v2.4.0-rc3 2025-08-14T21:22:30.6752057Z * [new tag] v2.4.0-rc4 -> v2.4.0-rc4 2025-08-14T21:22:30.6752437Z * [new tag] v2.4.0-rc5 -> v2.4.0-rc5 2025-08-14T21:22:30.6752812Z * [new tag] v2.4.0-rc6 -> v2.4.0-rc6 2025-08-14T21:22:30.6753199Z * [new tag] v2.4.0-rc7 -> v2.4.0-rc7 2025-08-14T21:22:30.6753574Z * [new tag] v2.4.0-rc8 -> v2.4.0-rc8 2025-08-14T21:22:30.6753937Z * [new tag] v2.4.0-rc9 -> v2.4.0-rc9 2025-08-14T21:22:30.6754223Z * [new tag] v2.4.1 -> v2.4.1 2025-08-14T21:22:30.6754526Z * [new tag] v2.4.1-rc1 -> v2.4.1-rc1 2025-08-14T21:22:30.6754956Z * [new tag] v2.4.1-rc2 -> v2.4.1-rc2 2025-08-14T21:22:30.6755242Z * [new tag] v2.4.1-rc3 -> v2.4.1-rc3 2025-08-14T21:22:30.6755547Z * [new tag] v2.5.0 -> v2.5.0 2025-08-14T21:22:30.6755843Z * [new tag] v2.5.0-rc1 -> v2.5.0-rc1 2025-08-14T21:22:30.6756158Z * [new tag] v2.5.0-rc10 -> v2.5.0-rc10 2025-08-14T21:22:30.6756455Z * [new tag] v2.5.0-rc2 -> v2.5.0-rc2 2025-08-14T21:22:30.6756756Z * [new tag] v2.5.0-rc3 -> v2.5.0-rc3 2025-08-14T21:22:30.6757108Z * [new tag] v2.5.0-rc4 -> v2.5.0-rc4 2025-08-14T21:22:30.6759500Z * [new tag] v2.5.0-rc5 -> v2.5.0-rc5 2025-08-14T21:22:30.6766188Z * [new tag] v2.5.0-rc6 -> v2.5.0-rc6 2025-08-14T21:22:30.6766814Z * [new tag] v2.5.0-rc7 -> v2.5.0-rc7 2025-08-14T21:22:30.6767639Z * [new tag] v2.5.0-rc8 -> v2.5.0-rc8 2025-08-14T21:22:30.6768477Z * [new tag] v2.5.0-rc9 -> v2.5.0-rc9 2025-08-14T21:22:30.6769107Z * [new tag] v2.5.1 -> v2.5.1 2025-08-14T21:22:30.6769740Z * [new tag] v2.5.1-rc1 -> v2.5.1-rc1 2025-08-14T21:22:30.6770315Z * [new tag] v2.6.0 -> v2.6.0 2025-08-14T21:22:30.6771104Z * [new tag] v2.6.0-rc1 -> v2.6.0-rc1 2025-08-14T21:22:30.6780125Z * [new tag] v2.6.0-rc2 -> v2.6.0-rc2 2025-08-14T21:22:30.6780506Z * [new tag] v2.6.0-rc3 -> v2.6.0-rc3 2025-08-14T21:22:30.6780889Z * [new tag] v2.6.0-rc4 -> v2.6.0-rc4 2025-08-14T21:22:30.6781349Z * [new tag] v2.6.0-rc5 -> v2.6.0-rc5 2025-08-14T21:22:30.6781655Z * [new tag] v2.6.0-rc6 -> v2.6.0-rc6 2025-08-14T21:22:30.6781945Z * [new tag] v2.6.0-rc7 -> v2.6.0-rc7 2025-08-14T21:22:30.6782243Z * [new tag] v2.6.0-rc8 -> v2.6.0-rc8 2025-08-14T21:22:30.6782538Z * [new tag] v2.6.0-rc9 -> v2.6.0-rc9 2025-08-14T21:22:30.6782819Z * [new tag] v2.7.0 -> v2.7.0 2025-08-14T21:22:30.6783115Z * [new tag] v2.7.0-rc1 -> v2.7.0-rc1 2025-08-14T21:22:30.6783423Z * [new tag] v2.7.0-rc10 -> v2.7.0-rc10 2025-08-14T21:22:30.6783722Z * [new tag] v2.7.0-rc2 -> v2.7.0-rc2 2025-08-14T21:22:30.6784013Z * [new tag] v2.7.0-rc3 -> v2.7.0-rc3 2025-08-14T21:22:30.6784318Z * [new tag] v2.7.0-rc4 -> v2.7.0-rc4 2025-08-14T21:22:30.6784642Z * [new tag] v2.7.0-rc5 -> v2.7.0-rc5 2025-08-14T21:22:30.6785415Z * [new tag] v2.7.0-rc6 -> v2.7.0-rc6 2025-08-14T21:22:30.6786231Z * [new tag] v2.7.0-rc7 -> v2.7.0-rc7 2025-08-14T21:22:30.6790792Z * [new tag] v2.7.0-rc8 -> v2.7.0-rc8 2025-08-14T21:22:30.6791089Z * [new tag] v2.7.0-rc9 -> v2.7.0-rc9 2025-08-14T21:22:30.6791383Z * [new tag] v2.7.1 -> v2.7.1 2025-08-14T21:22:30.6791669Z * [new tag] v2.7.1-rc1 -> v2.7.1-rc1 2025-08-14T21:22:30.6791968Z * [new tag] v2.7.1-rc2 -> v2.7.1-rc2 2025-08-14T21:22:30.6792266Z * [new tag] v2.7.1-rc3 -> v2.7.1-rc3 2025-08-14T21:22:30.6792559Z * [new tag] v2.7.1-rc4 -> v2.7.1-rc4 2025-08-14T21:22:30.6793019Z * [new tag] v2.7.1-rc5 -> v2.7.1-rc5 
2025-08-14T21:22:30.6793660Z * [new tag] v2.8.0 -> v2.8.0 2025-08-14T21:22:30.6794906Z * [new tag] v2.8.0-rc1 -> v2.8.0-rc1 2025-08-14T21:22:30.6795593Z * [new tag] v2.8.0-rc2 -> v2.8.0-rc2 2025-08-14T21:22:30.6796464Z * [new tag] v2.8.0-rc3 -> v2.8.0-rc3 2025-08-14T21:22:30.6797260Z * [new tag] v2.8.0-rc4 -> v2.8.0-rc4 2025-08-14T21:22:30.6798117Z * [new tag] v2.8.0-rc5 -> v2.8.0-rc5 2025-08-14T21:22:30.6798941Z * [new tag] v2.8.0-rc6 -> v2.8.0-rc6 2025-08-14T21:22:30.6799756Z * [new tag] v2.8.0-rc7 -> v2.8.0-rc7 2025-08-14T21:22:30.6800615Z * [new tag] v2.8.0-rc8 -> v2.8.0-rc8 2025-08-14T21:22:30.6808463Z * [new tag] whc_flight_1 -> whc_flight_1 2025-08-14T21:22:30.6808988Z * [new tag] whc_flight_2 -> whc_flight_2 2025-08-14T21:22:30.6809785Z * [new tag] whc_flight_4 -> whc_flight_4 2025-08-14T21:22:30.7588636Z [command]/usr/bin/git rev-parse --verify --quiet 1fc683cf17c8c673044538d10266c00f92987be2^{object} 2025-08-14T21:22:30.7610255Z 1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:22:30.7617418Z ##[endgroup] 2025-08-14T21:22:30.7617784Z ##[group]Determining the checkout info 2025-08-14T21:22:30.7618188Z ##[endgroup] 2025-08-14T21:22:30.7620337Z [command]/usr/bin/git sparse-checkout disable 2025-08-14T21:22:30.7659055Z [command]/usr/bin/git config --local --unset-all extensions.worktreeConfig 2025-08-14T21:22:30.7684397Z ##[group]Checking out the ref 2025-08-14T21:22:30.7688343Z [command]/usr/bin/git checkout --progress --force 1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:22:31.7959204Z Updating files: 79% (15553/19474) 2025-08-14T21:22:31.8265899Z Updating files: 80% (15580/19474) 2025-08-14T21:22:31.8583234Z Updating files: 81% (15774/19474) 2025-08-14T21:22:31.8811338Z Updating files: 82% (15969/19474) 2025-08-14T21:22:31.8964998Z Updating files: 83% (16164/19474) 2025-08-14T21:22:31.9095785Z Updating files: 84% (16359/19474) 2025-08-14T21:22:31.9263640Z Updating files: 85% (16553/19474) 2025-08-14T21:22:31.9425198Z Updating files: 86% (16748/19474) 2025-08-14T21:22:31.9570059Z Updating files: 87% (16943/19474) 2025-08-14T21:22:31.9699892Z Updating files: 88% (17138/19474) 2025-08-14T21:22:31.9831779Z Updating files: 89% (17332/19474) 2025-08-14T21:22:32.0020989Z Updating files: 90% (17527/19474) 2025-08-14T21:22:32.0159987Z Updating files: 91% (17722/19474) 2025-08-14T21:22:32.0295321Z Updating files: 92% (17917/19474) 2025-08-14T21:22:32.0498537Z Updating files: 93% (18111/19474) 2025-08-14T21:22:32.0711079Z Updating files: 94% (18306/19474) 2025-08-14T21:22:32.0905297Z Updating files: 95% (18501/19474) 2025-08-14T21:22:32.1092024Z Updating files: 96% (18696/19474) 2025-08-14T21:22:32.1269759Z Updating files: 97% (18890/19474) 2025-08-14T21:22:32.1575898Z Updating files: 98% (19085/19474) 2025-08-14T21:22:32.1742282Z Updating files: 99% (19280/19474) 2025-08-14T21:22:32.1742573Z Updating files: 100% (19474/19474) 2025-08-14T21:22:32.1742859Z Updating files: 100% (19474/19474), done. 2025-08-14T21:22:32.2054016Z Note: switching to '1fc683cf17c8c673044538d10266c00f92987be2'. 2025-08-14T21:22:32.2054325Z 2025-08-14T21:22:32.2054548Z You are in 'detached HEAD' state. You can look around, make experimental 2025-08-14T21:22:32.2059171Z changes and commit them, and you can discard any commits you make in this 2025-08-14T21:22:32.2059624Z state without impacting any branches by switching back to a branch. 
2025-08-14T21:22:32.2059870Z 2025-08-14T21:22:32.2060048Z If you want to create a new branch to retain commits you create, you may 2025-08-14T21:22:32.2060447Z do so (now or later) by using -c with the switch command. Example: 2025-08-14T21:22:32.2060849Z 2025-08-14T21:22:32.2060949Z git switch -c <new-branch-name> 2025-08-14T21:22:32.2061118Z 2025-08-14T21:22:32.2061213Z Or undo this operation with: 2025-08-14T21:22:32.2061357Z 2025-08-14T21:22:32.2061460Z git switch - 2025-08-14T21:22:32.2061576Z 2025-08-14T21:22:32.2061766Z Turn off this advice by setting config variable advice.detachedHead to false 2025-08-14T21:22:32.2062028Z 2025-08-14T21:22:32.2062345Z HEAD is now at 1fc683cf17c [Inductor] Allow indexing a flexible layout for extract_input_node_reduction_ranges (#160645) 2025-08-14T21:22:32.2124861Z ##[endgroup] 2025-08-14T21:22:32.2125248Z ##[group]Setting up auth for fetching submodules 2025-08-14T21:22:32.2132224Z [command]/usr/bin/git config --global http.https://github.com/.extraheader AUTHORIZATION: basic *** 2025-08-14T21:22:32.2177596Z [command]/usr/bin/git config --global --unset-all url.https://github.com/.insteadOf 2025-08-14T21:22:32.2204983Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf git@github.com: 2025-08-14T21:22:32.2237662Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf org-21003710@github.com: 2025-08-14T21:22:32.2256115Z ##[endgroup] 2025-08-14T21:22:32.2256653Z ##[group]Fetching submodules 2025-08-14T21:22:32.2264545Z [command]/usr/bin/git submodule sync --recursive 2025-08-14T21:22:32.2604968Z [command]/usr/bin/git -c protocol.version=2 submodule update --init --force --recursive 2025-08-14T21:22:32.2961741Z Submodule 'android/libs/fbjni' (https://github.com/facebookincubator/fbjni.git) registered for path 'android/libs/fbjni' 2025-08-14T21:22:32.2963847Z Submodule 'third_party/NNPACK_deps/FP16' (https://github.com/Maratyszcza/FP16.git) registered for path 'third_party/FP16' 2025-08-14T21:22:32.2966804Z Submodule 'third_party/NNPACK_deps/FXdiv' (https://github.com/Maratyszcza/FXdiv.git) registered for path 'third_party/FXdiv' 2025-08-14T21:22:32.2977102Z Submodule 'third_party/NNPACK' (https://github.com/Maratyszcza/NNPACK.git) registered for path 'third_party/NNPACK' 2025-08-14T21:22:32.2977768Z Submodule 'third_party/NVTX' (https://github.com/NVIDIA/NVTX.git) registered for path 'third_party/NVTX' 2025-08-14T21:22:32.2980195Z Submodule 'third_party/VulkanMemoryAllocator' (https://github.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator.git) registered for path 'third_party/VulkanMemoryAllocator' 2025-08-14T21:22:32.2983083Z Submodule 'third_party/XNNPACK' (https://github.com/google/XNNPACK.git) registered for path 'third_party/XNNPACK' 2025-08-14T21:22:32.2987782Z Submodule 'third_party/aiter' (https://github.com/ROCm/aiter.git) registered for path 'third_party/aiter' 2025-08-14T21:22:32.2989631Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/benchmark' 2025-08-14T21:22:32.2992894Z Submodule 'third_party/composable_kernel' (https://github.com/ROCm/composable_kernel.git) registered for path 'third_party/composable_kernel' 2025-08-14T21:22:32.2996170Z Submodule 'third_party/cpp-httplib' (https://github.com/yhirose/cpp-httplib.git) registered for path 'third_party/cpp-httplib' 2025-08-14T21:22:32.3004154Z Submodule 'third_party/cpuinfo' (https://github.com/pytorch/cpuinfo.git) registered for path 'third_party/cpuinfo' 2025-08-14T21:22:32.3007822Z
Submodule 'third_party/cudnn_frontend' (https://github.com/NVIDIA/cudnn-frontend.git) registered for path 'third_party/cudnn_frontend' 2025-08-14T21:22:32.3011159Z Submodule 'third_party/cutlass' (https://github.com/NVIDIA/cutlass.git) registered for path 'third_party/cutlass' 2025-08-14T21:22:32.3020517Z Submodule 'third_party/fbgemm' (https://github.com/pytorch/fbgemm) registered for path 'third_party/fbgemm' 2025-08-14T21:22:32.3021511Z Submodule 'third_party/flash-attention' (https://github.com/Dao-AILab/flash-attention.git) registered for path 'third_party/flash-attention' 2025-08-14T21:22:32.3024277Z Submodule 'third_party/flatbuffers' (https://github.com/google/flatbuffers.git) registered for path 'third_party/flatbuffers' 2025-08-14T21:22:32.3037359Z Submodule 'third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/fmt' 2025-08-14T21:22:32.3049648Z Submodule 'third_party/gemmlowp/gemmlowp' (https://github.com/google/gemmlowp.git) registered for path 'third_party/gemmlowp/gemmlowp' 2025-08-14T21:22:32.3050413Z Submodule 'third_party/gloo' (https://github.com/pytorch/gloo) registered for path 'third_party/gloo' 2025-08-14T21:22:32.3052563Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/googletest' 2025-08-14T21:22:32.3061984Z Submodule 'third_party/ideep' (https://github.com/intel/ideep) registered for path 'third_party/ideep' 2025-08-14T21:22:32.3062801Z Submodule 'third_party/ittapi' (https://github.com/intel/ittapi.git) registered for path 'third_party/ittapi' 2025-08-14T21:22:32.3065737Z Submodule 'third_party/kineto' (https://github.com/pytorch/kineto) registered for path 'third_party/kineto' 2025-08-14T21:22:32.3074184Z Submodule 'third_party/kleidiai' (https://github.com/ARM-software/kleidiai.git) registered for path 'third_party/kleidiai' 2025-08-14T21:22:32.3078965Z Submodule 'third_party/mimalloc' (https://github.com/microsoft/mimalloc.git) registered for path 'third_party/mimalloc' 2025-08-14T21:22:32.3083548Z Submodule 'third_party/nlohmann' (https://github.com/nlohmann/json.git) registered for path 'third_party/nlohmann' 2025-08-14T21:22:32.3088915Z Submodule 'third_party/onnx' (https://github.com/onnx/onnx.git) registered for path 'third_party/onnx' 2025-08-14T21:22:32.3093309Z Submodule 'third_party/opentelemetry-cpp' (https://github.com/open-telemetry/opentelemetry-cpp.git) registered for path 'third_party/opentelemetry-cpp' 2025-08-14T21:22:32.3098042Z Submodule 'third_party/pocketfft' (https://github.com/mreineck/pocketfft) registered for path 'third_party/pocketfft' 2025-08-14T21:22:32.3107819Z Submodule 'third_party/protobuf' (https://github.com/protocolbuffers/protobuf.git) registered for path 'third_party/protobuf' 2025-08-14T21:22:32.3112575Z Submodule 'third_party/NNPACK_deps/psimd' (https://github.com/Maratyszcza/psimd.git) registered for path 'third_party/psimd' 2025-08-14T21:22:32.3118022Z Submodule 'third_party/NNPACK_deps/pthreadpool' (https://github.com/Maratyszcza/pthreadpool.git) registered for path 'third_party/pthreadpool' 2025-08-14T21:22:32.3124372Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/pybind11' 2025-08-14T21:22:32.3136598Z Submodule 'third_party/python-peachpy' (https://github.com/malfet/PeachPy.git) registered for path 'third_party/python-peachpy' 2025-08-14T21:22:32.3139499Z Submodule 'third_party/sleef' (https://github.com/shibatch/sleef) registered for path 'third_party/sleef' 
2025-08-14T21:22:32.3146903Z Submodule 'third_party/tensorpipe' (https://github.com/pytorch/tensorpipe.git) registered for path 'third_party/tensorpipe' 2025-08-14T21:22:32.3181195Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/android/libs/fbjni'... 2025-08-14T21:22:32.6392232Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/FXdiv'... 2025-08-14T21:22:32.6397247Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/psimd'... 2025-08-14T21:22:32.6398215Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/FP16'... 2025-08-14T21:22:32.6399059Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pthreadpool'... 2025-08-14T21:22:32.6399591Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pocketfft'... 2025-08-14T21:22:32.6400568Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/NNPACK'... 2025-08-14T21:22:32.6895337Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/NVTX'... 2025-08-14T21:22:32.8820834Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/python-peachpy'... 2025-08-14T21:22:32.8821568Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep'... 2025-08-14T21:22:32.8822378Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/gloo'... 2025-08-14T21:22:32.8823020Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/gemmlowp/gemmlowp'... 2025-08-14T21:22:32.8823735Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/benchmark'... 2025-08-14T21:22:32.8824254Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kleidiai'... 2025-08-14T21:22:32.8824977Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ittapi'... 2025-08-14T21:22:32.8825951Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe'... 2025-08-14T21:22:32.9822451Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/VulkanMemoryAllocator'... 2025-08-14T21:22:34.5244815Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cpp-httplib'... 2025-08-14T21:22:34.5246415Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flash-attention'... 2025-08-14T21:22:34.5247555Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cpuinfo'... 2025-08-14T21:22:34.5248644Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/mimalloc'... 2025-08-14T21:22:34.5250251Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/sleef'... 2025-08-14T21:22:34.5251328Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/googletest'... 2025-08-14T21:22:34.5252412Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pybind11'... 2025-08-14T21:22:34.5253536Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cudnn_frontend'... 2025-08-14T21:22:34.5254612Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto'... 2025-08-14T21:22:34.5259109Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fmt'... 2025-08-14T21:22:34.5295137Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/XNNPACK'... 
2025-08-14T21:22:45.9204262Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flatbuffers'... 2025-08-14T21:22:45.9204978Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cutlass'... 2025-08-14T21:22:45.9205488Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm'... 2025-08-14T21:22:45.9205983Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx'... 2025-08-14T21:22:45.9206519Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/composable_kernel'... 2025-08-14T21:22:45.9207049Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/aiter'... 2025-08-14T21:22:45.9207616Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp'... 2025-08-14T21:22:45.9208175Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/nlohmann'... 2025-08-14T21:22:45.9208687Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf'... 2025-08-14T21:22:45.9381362Z Submodule path 'android/libs/fbjni': checked out '7e1e1fe3858c63c251c637ae41a20de425dde96f' 2025-08-14T21:22:45.9525465Z Submodule path 'third_party/FP16': checked out '4dfe081cf6bcd15db339cf2680b9281b8451eeb3' 2025-08-14T21:22:45.9628747Z Submodule path 'third_party/FXdiv': checked out 'b408327ac2a15ec3e43352421954f5b1967701d1' 2025-08-14T21:22:45.9903396Z Submodule path 'third_party/NNPACK': checked out 'c07e3a0400713d546e0dea2d5466dd22ea389c73' 2025-08-14T21:22:46.0785325Z Submodule path 'third_party/NVTX': checked out '2942f167cc30c5e3a44a2aecd5b0d9c07ff61a07' 2025-08-14T21:22:46.1331798Z Submodule path 'third_party/VulkanMemoryAllocator': checked out '1d8f600fd424278486eade7ed3e877c99f0846b1' 2025-08-14T21:22:46.9351010Z Submodule path 'third_party/XNNPACK': checked out '51a0103656eff6fc9bfd39a4597923c4b542c883' 2025-08-14T21:22:47.1037538Z Submodule path 'third_party/aiter': checked out '01aae101b9e5e94d6c16a9514c9fb8df99c93150' 2025-08-14T21:22:47.1056777Z Submodule '3rdparty/composable_kernel' (https://github.com/ROCm/composable_kernel.git) registered for path 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T21:22:47.1093781Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/aiter/3rdparty/composable_kernel'... 
2025-08-14T21:22:50.4573059Z Submodule path 'third_party/aiter/3rdparty/composable_kernel': checked out 'cffe8fa2a442ac8e80dd236a1a5d24fe3d7e0cbf' 2025-08-14T21:22:50.4850601Z Submodule path 'third_party/benchmark': checked out '299e5928955cc62af9968370293b916f5130916f' 2025-08-14T21:22:50.8463158Z Submodule path 'third_party/composable_kernel': checked out '7fe50dc3da2069d6645d9deb8c017a876472a977' 2025-08-14T21:22:50.9114817Z Submodule path 'third_party/cpp-httplib': checked out '3af7f2c16147f3fbc6e4d717032daf505dc1652c' 2025-08-14T21:22:51.0231177Z Submodule path 'third_party/cpuinfo': checked out '5e3d2445e6a84d9599bee2bf78edbb4d80865e1d' 2025-08-14T21:22:51.0734058Z Submodule path 'third_party/cudnn_frontend': checked out 'f937055efc6d414d11f4c6577e3977fe74f35fb6' 2025-08-14T21:22:51.7946565Z Submodule path 'third_party/cutlass': checked out 'e51efbfe18fe4f4cbb66ab814c55bf4aa0185491' 2025-08-14T21:22:51.9511879Z Submodule path 'third_party/fbgemm': checked out '21c7d30c526c0f1ad873ecc632dca6cfa8a69067' 2025-08-14T21:22:51.9528587Z Submodule 'external/asmjit' (https://github.com/asmjit/asmjit.git) registered for path 'third_party/fbgemm/external/asmjit' 2025-08-14T21:22:51.9537696Z Submodule 'external/composable_kernel' (https://github.com/jwfromm/composable_kernel.git) registered for path 'third_party/fbgemm/external/composable_kernel' 2025-08-14T21:22:51.9539596Z Submodule 'external/cpuinfo' (https://github.com/pytorch/cpuinfo) registered for path 'third_party/fbgemm/external/cpuinfo' 2025-08-14T21:22:51.9541073Z Submodule 'external/cutlass' (https://github.com/jwfromm/cutlass) registered for path 'third_party/fbgemm/external/cutlass' 2025-08-14T21:22:51.9542612Z Submodule 'external/googletest' (https://github.com/google/googletest) registered for path 'third_party/fbgemm/external/googletest' 2025-08-14T21:22:51.9544336Z Submodule 'external/hipify_torch' (https://github.com/ROCmSoftwarePlatform/hipify_torch.git) registered for path 'third_party/fbgemm/external/hipify_torch' 2025-08-14T21:22:51.9552130Z Submodule 'external/json' (https://github.com/nlohmann/json.git) registered for path 'third_party/fbgemm/external/json' 2025-08-14T21:22:51.9583667Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/asmjit'... 2025-08-14T21:22:53.2027939Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/hipify_torch'... 2025-08-14T21:22:53.2028714Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/cpuinfo'... 2025-08-14T21:22:53.2033635Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/googletest'... 2025-08-14T21:22:53.2034281Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/composable_kernel'... 2025-08-14T21:22:53.3029452Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/cutlass'... 2025-08-14T21:22:54.1726205Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/json'... 
2025-08-14T21:22:58.8760971Z Submodule path 'third_party/fbgemm/external/asmjit': checked out 'a3199e8857792cd10b7589ff5d58343d2c9008ea' 2025-08-14T21:22:59.1688856Z Submodule path 'third_party/fbgemm/external/composable_kernel': checked out 'b1281b8b08d973a7064f864f47eeb30f3e2596e9' 2025-08-14T21:22:59.2840318Z Submodule path 'third_party/fbgemm/external/cpuinfo': checked out '6543fec09b2f04ac4a666882998b534afc9c1349' 2025-08-14T21:22:59.9894444Z Submodule path 'third_party/fbgemm/external/cutlass': checked out 'b40777404c174b9694a870bff5c13ce6b7f656ad' 2025-08-14T21:23:00.0423355Z Submodule path 'third_party/fbgemm/external/googletest': checked out '52eb8108c5bdec04579160ae17225d66034bd723' 2025-08-14T21:23:00.0580446Z Submodule path 'third_party/fbgemm/external/hipify_torch': checked out 'a4337c69fe0e2552a7b7b0669178926beeed828c' 2025-08-14T21:23:00.1772359Z Submodule path 'third_party/fbgemm/external/json': checked out '9cca280a4d0ccf0c08f47a99aa71d1b0e52f8d03' 2025-08-14T21:23:00.2579301Z Submodule path 'third_party/flash-attention': checked out '979702c87a8713a8e0a5e9fee122b90d2ef13be5' 2025-08-14T21:23:00.2598849Z Submodule 'csrc/composable_kernel' (https://github.com/ROCm/composable_kernel.git) registered for path 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T21:23:00.2600383Z Submodule 'csrc/cutlass' (https://github.com/NVIDIA/cutlass.git) registered for path 'third_party/flash-attention/csrc/cutlass' 2025-08-14T21:23:00.2639159Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flash-attention/csrc/composable_kernel'... 2025-08-14T21:23:03.2930990Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flash-attention/csrc/cutlass'... 2025-08-14T21:23:03.5686913Z Submodule path 'third_party/flash-attention/csrc/composable_kernel': checked out '888317e698e9803c62bd38568abc9e05d7709f33' 2025-08-14T21:23:04.2210763Z Submodule path 'third_party/flash-attention/csrc/cutlass': checked out 'c506e16788cb08416a4a57e11a9067beeee29420' 2025-08-14T21:23:04.3771038Z Submodule path 'third_party/flatbuffers': checked out 'a2cd1ea3b6d3fee220106b5fed3f7ce8da9eb757' 2025-08-14T21:23:04.4166906Z Submodule path 'third_party/fmt': checked out '40626af88bd7df9a5fb80be7b25ac85b122d6c21' 2025-08-14T21:23:04.4604242Z Submodule path 'third_party/gemmlowp/gemmlowp': checked out '3fb5c176c17c765a3492cd2f0321b0dab712f350' 2025-08-14T21:23:04.4895474Z Submodule path 'third_party/gloo': checked out 'c7b7b022c124d9643957d9bd55f57ac59fce8fa2' 2025-08-14T21:23:04.5418421Z Submodule path 'third_party/googletest': checked out '52eb8108c5bdec04579160ae17225d66034bd723' 2025-08-14T21:23:04.5575845Z Submodule path 'third_party/ideep': checked out '719d8e6cd7f7a0e01b155657526d693acf97c2b3' 2025-08-14T21:23:04.5598193Z Submodule 'mkl-dnn' (https://github.com/intel/mkl-dnn.git) registered for path 'third_party/ideep/mkl-dnn' 2025-08-14T21:23:04.5621253Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep/mkl-dnn'... 
2025-08-14T21:23:15.6572420Z Submodule path 'third_party/ideep/mkl-dnn': checked out '8d263e693366ef8db40acc569cc7d8edf644556d' 2025-08-14T21:23:15.6818167Z Submodule path 'third_party/ittapi': checked out 'dec1d23ca65ab069d225dfe40dea14f455170959' 2025-08-14T21:23:15.7851253Z Submodule path 'third_party/kineto': checked out '5e7501833f1021ce6f618572d3baf657b6319658' 2025-08-14T21:23:15.7873902Z Submodule 'libkineto/third_party/dynolog' (https://github.com/facebookincubator/dynolog.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T21:23:15.7875317Z Submodule 'libkineto/third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T21:23:15.7877771Z Submodule 'libkineto/third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T21:23:15.7919302Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog'... 2025-08-14T21:23:16.4839624Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/fmt'... 2025-08-14T21:23:17.3376962Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/googletest'... 2025-08-14T21:23:17.4324086Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog': checked out '7d04a0053a845370ae06ce317a22a48e9edcc74e' 2025-08-14T21:23:17.4346290Z Submodule 'third_party/DCGM' (https://github.com/NVIDIA/DCGM.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-08-14T21:23:17.4348078Z Submodule 'third_party/cpr' (https://github.com/libcpr/cpr.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T21:23:17.4351141Z Submodule 'third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T21:23:17.4362046Z Submodule 'third_party/gflags' (https://github.com/gflags/gflags.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T21:23:17.4363162Z Submodule 'third_party/glog' (https://github.com/google/glog.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T21:23:17.4364083Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T21:23:17.4366436Z Submodule 'third_party/json' (https://github.com/nlohmann/json.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T21:23:17.4376645Z Submodule 'third_party/pfs' (https://github.com/dtrugman/pfs.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T21:23:17.4402181Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'... 2025-08-14T21:23:18.7429343Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'... 2025-08-14T21:23:18.7430434Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'... 
2025-08-14T21:23:18.7431494Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'... 2025-08-14T21:23:18.7432447Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/glog'... 2025-08-14T21:23:18.7433336Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'... 2025-08-14T21:23:18.7434175Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'... 2025-08-14T21:23:18.8429838Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/json'... 2025-08-14T21:23:26.1047326Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM': checked out 'ffde4e54bc7249a6039a5e6b45b395141e1217f9' 2025-08-14T21:23:26.1249745Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr': checked out '871ed52d350214a034f6ef8a3b8f51c5ce1bd400' 2025-08-14T21:23:26.1656546Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt': checked out 'cd4af11efc9c622896a3e4cb599fa28668ca3d05' 2025-08-14T21:23:26.1801158Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags': checked out 'e171aa2d15ed9eb17054558e0b3a6a413bb01067' 2025-08-14T21:23:26.1826566Z Submodule 'doc' (https://github.com/gflags/gflags.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T21:23:26.1851883Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'... 
2025-08-14T21:23:26.5046716Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc': checked out '8411df715cf522606e3b1aca386ddfc0b63d34b4' 2025-08-14T21:23:26.5251504Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog': checked out 'b33e3bad4c46c8a6345525fd822af355e5ef9446' 2025-08-14T21:23:26.5715886Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest': checked out '58d77fa8070e8cec2dc1ed015d66b454c8d78850' 2025-08-14T21:23:26.6839722Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/json': checked out '4f8fba14066156b73f1189a2b8bd568bde5284c5' 2025-08-14T21:23:26.7042268Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs': checked out 'f68a2fa8ea36c783bdd760371411fcb495aa3150' 2025-08-14T21:23:26.7436274Z Submodule path 'third_party/kineto/libkineto/third_party/fmt': checked out '0041a40c1350ba702d475b9c4ad62da77caea164' 2025-08-14T21:23:26.8120923Z Submodule path 'third_party/kineto/libkineto/third_party/googletest': checked out '7aca84427f224eeed3144123d5230d5871e93347' 2025-08-14T21:23:26.8614733Z Submodule path 'third_party/kleidiai': checked out 'cca02c2f69dd18e1f12647c1c0bdc8cf90e680c7' 2025-08-14T21:23:26.9047550Z Submodule path 'third_party/mimalloc': checked out 'fbd8b99c2b828428947d70fdc046bb55609be93e' 2025-08-14T21:23:27.0310826Z Submodule path 'third_party/nlohmann': checked out '55f93686c01528224f448c19128836e7df245f72' 2025-08-14T21:23:27.4816203Z Submodule path 'third_party/onnx': checked out 'e709452ef2bbc1d113faf678c24e6d3467696e83' 2025-08-14T21:23:27.4861554Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/onnx/third_party/pybind11' 2025-08-14T21:23:27.4890671Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx/third_party/pybind11'... 
2025-08-14T21:23:28.5204906Z Submodule path 'third_party/onnx/third_party/pybind11': checked out 'a2e59f0e7065404b44dfe92a28aca47ba1378dc4' 2025-08-14T21:23:28.5998902Z Submodule path 'third_party/opentelemetry-cpp': checked out 'a799f4aed9c94b765dcdaabaeab7d5e7e2310878' 2025-08-14T21:23:28.6017826Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark) registered for path 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T21:23:28.6023854Z Submodule 'third_party/googletest' (https://github.com/google/googletest) registered for path 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T21:23:28.6026133Z Submodule 'third_party/ms-gsl' (https://github.com/microsoft/GSL) registered for path 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T21:23:28.6028594Z Submodule 'third_party/nlohmann-json' (https://github.com/nlohmann/json) registered for path 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T21:23:28.6031250Z Submodule 'third_party/opentelemetry-proto' (https://github.com/open-telemetry/opentelemetry-proto) registered for path 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T21:23:28.6036764Z Submodule 'third_party/opentracing-cpp' (https://github.com/opentracing/opentracing-cpp.git) registered for path 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T21:23:28.6037731Z Submodule 'third_party/prometheus-cpp' (https://github.com/jupp0r/prometheus-cpp) registered for path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T21:23:28.6041881Z Submodule 'tools/vcpkg' (https://github.com/Microsoft/vcpkg) registered for path 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T21:23:28.6067688Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/benchmark'... 2025-08-14T21:23:29.0116861Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/opentracing-cpp'... 2025-08-14T21:23:29.0117692Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/opentelemetry-proto'... 2025-08-14T21:23:29.0118465Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/prometheus-cpp'... 2025-08-14T21:23:29.0119403Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/ms-gsl'... 2025-08-14T21:23:29.1117918Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/googletest'... 2025-08-14T21:23:29.6783749Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/nlohmann-json'... 2025-08-14T21:23:36.6272680Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/tools/vcpkg'... 
2025-08-14T21:23:36.7737448Z Submodule path 'third_party/opentelemetry-cpp/third_party/benchmark': checked out 'd572f4777349d43653b21d6c2fc63020ab326db2' 2025-08-14T21:23:36.8198381Z Submodule path 'third_party/opentelemetry-cpp/third_party/googletest': checked out 'b796f7d44681514f58a683a3a71ff17c94edb0c1' 2025-08-14T21:23:36.8390136Z Submodule path 'third_party/opentelemetry-cpp/third_party/ms-gsl': checked out '6f4529395c5b7c2d661812257cd6780c67e54afa' 2025-08-14T21:23:36.9575831Z Submodule path 'third_party/opentelemetry-cpp/third_party/nlohmann-json': checked out 'bc889afb4c5bf1c0d8ee29ef35eaaf4c8bef8a5d' 2025-08-14T21:23:36.9738582Z Submodule path 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto': checked out '4ca4f0335c63cda7ab31ea7ed70d6553aee14dce' 2025-08-14T21:23:36.9914932Z Submodule path 'third_party/opentelemetry-cpp/third_party/opentracing-cpp': checked out '06b57f48ded1fa3bdd3d4346f6ef29e40e08eaf5' 2025-08-14T21:23:37.0099044Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp': checked out 'c9ffcdda9086ffd9e1283ea7a0276d831f3c8a8d' 2025-08-14T21:23:37.0126202Z Submodule 'civetweb' (https://github.com/civetweb/civetweb.git) registered for path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T21:23:37.0127757Z Submodule 'googletest' (https://github.com/google/googletest.git) registered for path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T21:23:37.0152256Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'... 2025-08-14T21:23:38.7937734Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'... 2025-08-14T21:23:39.0729378Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb': checked out 'eefb26f82b233268fc98577d265352720d477ba4' 2025-08-14T21:23:39.1253939Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest': checked out 'e2239ee6043f73722e7aa812a459f54a28552929' 2025-08-14T21:23:39.6507664Z Submodule path 'third_party/opentelemetry-cpp/tools/vcpkg': checked out '8eb57355a4ffb410a2e94c07b4dca2dffbee8e50' 2025-08-14T21:23:39.6638105Z Submodule path 'third_party/pocketfft': checked out '0fa0ef591e38c2758e3184c6c23e497b9f732ffa' 2025-08-14T21:23:39.9707582Z Submodule path 'third_party/protobuf': checked out 'd1eca4e4b421cd2997495c4b4e65cea6be4e9b8a' 2025-08-14T21:23:39.9739465Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/protobuf/third_party/benchmark' 2025-08-14T21:23:39.9741176Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/protobuf/third_party/googletest' 2025-08-14T21:23:39.9766799Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/benchmark'... 2025-08-14T21:23:40.5741309Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/googletest'... 
2025-08-14T21:23:40.9206191Z Submodule path 'third_party/protobuf/third_party/benchmark': checked out '5b7683f49e1e9223cf9927b24f6fd3d6bd82e3f8' 2025-08-14T21:23:41.0033980Z Submodule path 'third_party/protobuf/third_party/googletest': checked out '5ec7f0c4a113e2f18ac2c6cc7df51ad6afc24081' 2025-08-14T21:23:41.0134932Z Submodule path 'third_party/psimd': checked out '072586a71b55b7f8c584153d223e95687148a900' 2025-08-14T21:23:41.0275321Z Submodule path 'third_party/pthreadpool': checked out '4fe0e1e183925bf8cfa6aae24237e724a96479b8' 2025-08-14T21:23:41.0696899Z Submodule path 'third_party/pybind11': checked out 'a2e59f0e7065404b44dfe92a28aca47ba1378dc4' 2025-08-14T21:23:41.1035579Z Submodule path 'third_party/python-peachpy': checked out 'f45429b087dd7d5bc78bb40dc7cf06425c252d67' 2025-08-14T21:23:41.1525355Z Submodule path 'third_party/sleef': checked out '5a1d179df9cf652951b59010a2d2075372d67f68' 2025-08-14T21:23:41.1829511Z Submodule path 'third_party/tensorpipe': checked out 'dacda0567d9f23d4bc503e1c4f84aa65f33ac38a' 2025-08-14T21:23:41.1847029Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/tensorpipe/third_party/googletest' 2025-08-14T21:23:41.1848553Z Submodule 'third_party/libnop' (https://github.com/google/libnop.git) registered for path 'third_party/tensorpipe/third_party/libnop' 2025-08-14T21:23:41.1860040Z Submodule 'third_party/libuv' (https://github.com/libuv/libuv.git) registered for path 'third_party/tensorpipe/third_party/libuv' 2025-08-14T21:23:41.1862462Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T21:23:41.1887977Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/googletest'... 2025-08-14T21:23:42.0673025Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libnop'... 2025-08-14T21:23:42.1670911Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libuv'... 2025-08-14T21:23:42.3809131Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11'... 2025-08-14T21:23:42.4461541Z Submodule path 'third_party/tensorpipe/third_party/googletest': checked out 'aee0f9d9b5b87796ee8a0ab26b7587ec30e8858e' 2025-08-14T21:23:42.4658982Z Submodule path 'third_party/tensorpipe/third_party/libnop': checked out '910b55815be16109f04f4180e9adee14fb4ce281' 2025-08-14T21:23:42.5490653Z Submodule path 'third_party/tensorpipe/third_party/libuv': checked out '5152db2cbfeb5582e9c27c5ea1dba2cd9e10759b' 2025-08-14T21:23:42.5823811Z Submodule path 'third_party/tensorpipe/third_party/pybind11': checked out 'a23996fce38ff6ccfbcdc09f1e63f2c4be5ea2ef' 2025-08-14T21:23:42.5838272Z Submodule 'tools/clang' (https://github.com/wjakob/clang-cindex-python3) registered for path 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T21:23:42.5866408Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11/tools/clang'... 
2025-08-14T21:23:42.7898062Z Submodule path 'third_party/tensorpipe/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' 2025-08-14T21:23:42.7943422Z [command]/usr/bin/git submodule foreach --recursive git config --local gc.auto 0 2025-08-14T21:23:42.8285813Z Entering 'android/libs/fbjni' 2025-08-14T21:23:42.8332236Z Entering 'third_party/FP16' 2025-08-14T21:23:42.8383740Z Entering 'third_party/FXdiv' 2025-08-14T21:23:42.8421720Z Entering 'third_party/NNPACK' 2025-08-14T21:23:42.8479138Z Entering 'third_party/NVTX' 2025-08-14T21:23:42.8522881Z Entering 'third_party/VulkanMemoryAllocator' 2025-08-14T21:23:42.8565567Z Entering 'third_party/XNNPACK' 2025-08-14T21:23:42.8631448Z Entering 'third_party/aiter' 2025-08-14T21:23:42.8675407Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T21:23:42.8751470Z Entering 'third_party/benchmark' 2025-08-14T21:23:42.8797800Z Entering 'third_party/composable_kernel' 2025-08-14T21:23:42.8852518Z Entering 'third_party/cpp-httplib' 2025-08-14T21:23:42.8898483Z Entering 'third_party/cpuinfo' 2025-08-14T21:23:42.8939770Z Entering 'third_party/cudnn_frontend' 2025-08-14T21:23:42.8992685Z Entering 'third_party/cutlass' 2025-08-14T21:23:42.9054820Z Entering 'third_party/fbgemm' 2025-08-14T21:23:42.9099387Z Entering 'third_party/fbgemm/external/asmjit' 2025-08-14T21:23:42.9145682Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-08-14T21:23:42.9200361Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-08-14T21:23:42.9253189Z Entering 'third_party/fbgemm/external/cutlass' 2025-08-14T21:23:42.9302122Z Entering 'third_party/fbgemm/external/googletest' 2025-08-14T21:23:42.9347339Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-08-14T21:23:42.9391202Z Entering 'third_party/fbgemm/external/json' 2025-08-14T21:23:42.9445427Z Entering 'third_party/flash-attention' 2025-08-14T21:23:42.9487441Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T21:23:42.9544861Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-08-14T21:23:42.9594947Z Entering 'third_party/flatbuffers' 2025-08-14T21:23:42.9650131Z Entering 'third_party/fmt' 2025-08-14T21:23:42.9697242Z Entering 'third_party/gemmlowp/gemmlowp' 2025-08-14T21:23:42.9747014Z Entering 'third_party/gloo' 2025-08-14T21:23:42.9804061Z Entering 'third_party/googletest' 2025-08-14T21:23:42.9842276Z Entering 'third_party/ideep' 2025-08-14T21:23:42.9894995Z Entering 'third_party/ideep/mkl-dnn' 2025-08-14T21:23:42.9939940Z Entering 'third_party/ittapi' 2025-08-14T21:23:42.9983838Z Entering 'third_party/kineto' 2025-08-14T21:23:43.0030109Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T21:23:43.0084172Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-08-14T21:23:43.0140660Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T21:23:43.0184102Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T21:23:43.0231208Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T21:23:43.0281196Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T21:23:43.0321004Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T21:23:43.0369260Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T21:23:43.0407172Z Entering 
'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T21:23:43.0448556Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T21:23:43.0504007Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T21:23:43.0544174Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T21:23:43.0586944Z Entering 'third_party/kleidiai' 2025-08-14T21:23:43.0634553Z Entering 'third_party/mimalloc' 2025-08-14T21:23:43.0683914Z Entering 'third_party/nlohmann' 2025-08-14T21:23:43.0738141Z Entering 'third_party/onnx' 2025-08-14T21:23:43.0799602Z Entering 'third_party/onnx/third_party/pybind11' 2025-08-14T21:23:43.0842883Z Entering 'third_party/opentelemetry-cpp' 2025-08-14T21:23:43.0892246Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T21:23:43.0938237Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T21:23:43.0987098Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T21:23:43.1030030Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T21:23:43.1083993Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T21:23:43.1118422Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T21:23:43.1171167Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T21:23:43.1212793Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T21:23:43.1255802Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T21:23:43.1307818Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T21:23:43.1378664Z Entering 'third_party/pocketfft' 2025-08-14T21:23:43.1431646Z Entering 'third_party/protobuf' 2025-08-14T21:23:43.1485951Z Entering 'third_party/protobuf/third_party/benchmark' 2025-08-14T21:23:43.1533467Z Entering 'third_party/protobuf/third_party/googletest' 2025-08-14T21:23:43.1580214Z Entering 'third_party/psimd' 2025-08-14T21:23:43.1634216Z Entering 'third_party/pthreadpool' 2025-08-14T21:23:43.1684040Z Entering 'third_party/pybind11' 2025-08-14T21:23:43.1732988Z Entering 'third_party/python-peachpy' 2025-08-14T21:23:43.1781464Z Entering 'third_party/sleef' 2025-08-14T21:23:43.1827593Z Entering 'third_party/tensorpipe' 2025-08-14T21:23:43.1870531Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-08-14T21:23:43.1911949Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-08-14T21:23:43.1951303Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-08-14T21:23:43.2002007Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T21:23:43.2056598Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T21:23:43.2110454Z ##[endgroup] 2025-08-14T21:23:43.2110889Z ##[group]Persisting credentials for submodules 2025-08-14T21:23:43.2114469Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'url\.https\:\/\/github\.com\/\.insteadOf' && git config --local --unset-all 'url.https://github.com/.insteadOf' || :" 2025-08-14T21:23:43.2457191Z Entering 'android/libs/fbjni' 2025-08-14T21:23:43.2529814Z Entering 'third_party/FP16' 2025-08-14T21:23:43.2595413Z Entering 'third_party/FXdiv' 2025-08-14T21:23:43.2665069Z Entering 'third_party/NNPACK' 2025-08-14T21:23:43.2722433Z Entering 'third_party/NVTX' 2025-08-14T21:23:43.2783027Z Entering 'third_party/VulkanMemoryAllocator' 
2025-08-14T21:23:43.2843650Z Entering 'third_party/XNNPACK' 2025-08-14T21:23:43.2915463Z Entering 'third_party/aiter' 2025-08-14T21:23:43.2984129Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T21:23:43.3054825Z Entering 'third_party/benchmark' 2025-08-14T21:23:43.3115577Z Entering 'third_party/composable_kernel' 2025-08-14T21:23:43.3187863Z Entering 'third_party/cpp-httplib' 2025-08-14T21:23:43.3245089Z Entering 'third_party/cpuinfo' 2025-08-14T21:23:43.3318195Z Entering 'third_party/cudnn_frontend' 2025-08-14T21:23:43.3385426Z Entering 'third_party/cutlass' 2025-08-14T21:23:43.3449524Z Entering 'third_party/fbgemm' 2025-08-14T21:23:43.3509602Z Entering 'third_party/fbgemm/external/asmjit' 2025-08-14T21:23:43.3582139Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-08-14T21:23:43.3648632Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-08-14T21:23:43.3709895Z Entering 'third_party/fbgemm/external/cutlass' 2025-08-14T21:23:43.3780891Z Entering 'third_party/fbgemm/external/googletest' 2025-08-14T21:23:43.3838938Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-08-14T21:23:43.3897724Z Entering 'third_party/fbgemm/external/json' 2025-08-14T21:23:43.3958609Z Entering 'third_party/flash-attention' 2025-08-14T21:23:43.4017409Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T21:23:43.4075773Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-08-14T21:23:43.4148356Z Entering 'third_party/flatbuffers' 2025-08-14T21:23:43.4221718Z Entering 'third_party/fmt' 2025-08-14T21:23:43.4285607Z Entering 'third_party/gemmlowp/gemmlowp' 2025-08-14T21:23:43.4346582Z Entering 'third_party/gloo' 2025-08-14T21:23:43.4410094Z Entering 'third_party/googletest' 2025-08-14T21:23:43.4468471Z Entering 'third_party/ideep' 2025-08-14T21:23:43.4526660Z Entering 'third_party/ideep/mkl-dnn' 2025-08-14T21:23:43.4598538Z Entering 'third_party/ittapi' 2025-08-14T21:23:43.4651162Z Entering 'third_party/kineto' 2025-08-14T21:23:43.4709432Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T21:23:43.4770695Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-08-14T21:23:43.4839575Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T21:23:43.4893594Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T21:23:43.4957634Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T21:23:43.5017143Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T21:23:43.5072690Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T21:23:43.5140630Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T21:23:43.5205336Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T21:23:43.5272348Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T21:23:43.5335865Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T21:23:43.5402331Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T21:23:43.5462372Z Entering 'third_party/kleidiai' 2025-08-14T21:23:43.5521903Z Entering 'third_party/mimalloc' 2025-08-14T21:23:43.5582692Z Entering 'third_party/nlohmann' 2025-08-14T21:23:43.5654863Z Entering 'third_party/onnx' 2025-08-14T21:23:43.5727626Z Entering 'third_party/onnx/third_party/pybind11' 
2025-08-14T21:23:43.5797073Z Entering 'third_party/opentelemetry-cpp' 2025-08-14T21:23:43.5865413Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T21:23:43.5927775Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T21:23:43.5987222Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T21:23:43.6054064Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T21:23:43.6117056Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T21:23:43.6170771Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T21:23:43.6233056Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T21:23:43.6292524Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T21:23:43.6363449Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T21:23:43.6409907Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T21:23:43.6494149Z Entering 'third_party/pocketfft' 2025-08-14T21:23:43.6555233Z Entering 'third_party/protobuf' 2025-08-14T21:23:43.6619825Z Entering 'third_party/protobuf/third_party/benchmark' 2025-08-14T21:23:43.6696014Z Entering 'third_party/protobuf/third_party/googletest' 2025-08-14T21:23:43.6752326Z Entering 'third_party/psimd' 2025-08-14T21:23:43.6816530Z Entering 'third_party/pthreadpool' 2025-08-14T21:23:43.6884759Z Entering 'third_party/pybind11' 2025-08-14T21:23:43.6944509Z Entering 'third_party/python-peachpy' 2025-08-14T21:23:43.7006819Z Entering 'third_party/sleef' 2025-08-14T21:23:43.7072929Z Entering 'third_party/tensorpipe' 2025-08-14T21:23:43.7145523Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-08-14T21:23:43.7200351Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-08-14T21:23:43.7263642Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-08-14T21:23:43.7318590Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T21:23:43.7380797Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T21:23:43.7454787Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local 'http.https://github.com/.extraheader' 'AUTHORIZATION: basic ***' && git config --local --show-origin --name-only --get-regexp remote.origin.url" 2025-08-14T21:23:43.7795438Z Entering 'android/libs/fbjni' 2025-08-14T21:23:43.7859655Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config remote.origin.url 2025-08-14T21:23:43.7873967Z Entering 'third_party/FP16' 2025-08-14T21:23:43.7930500Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config remote.origin.url 2025-08-14T21:23:43.7945994Z Entering 'third_party/FXdiv' 2025-08-14T21:23:43.8016011Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config remote.origin.url 2025-08-14T21:23:43.8031124Z Entering 'third_party/NNPACK' 2025-08-14T21:23:43.8093658Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config remote.origin.url 2025-08-14T21:23:43.8112519Z Entering 'third_party/NVTX' 2025-08-14T21:23:43.8174864Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NVTX/config remote.origin.url 2025-08-14T21:23:43.8190298Z Entering 'third_party/VulkanMemoryAllocator' 2025-08-14T21:23:43.8251383Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/VulkanMemoryAllocator/config remote.origin.url 2025-08-14T21:23:43.8276359Z Entering 'third_party/XNNPACK' 2025-08-14T21:23:43.8344085Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config remote.origin.url 2025-08-14T21:23:43.8380207Z Entering 'third_party/aiter' 2025-08-14T21:23:43.8446615Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/config remote.origin.url 2025-08-14T21:23:43.8465693Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T21:23:43.8519815Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/modules/3rdparty/composable_kernel/config remote.origin.url 2025-08-14T21:23:43.8542981Z Entering 'third_party/benchmark' 2025-08-14T21:23:43.8610374Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config remote.origin.url 2025-08-14T21:23:43.8623920Z Entering 'third_party/composable_kernel' 2025-08-14T21:23:43.8682460Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/composable_kernel/config remote.origin.url 2025-08-14T21:23:43.8708371Z Entering 'third_party/cpp-httplib' 2025-08-14T21:23:43.8769750Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cpp-httplib/config remote.origin.url 2025-08-14T21:23:43.8786961Z Entering 'third_party/cpuinfo' 2025-08-14T21:23:43.8855069Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config remote.origin.url 2025-08-14T21:23:43.8867901Z Entering 'third_party/cudnn_frontend' 2025-08-14T21:23:43.8929284Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config remote.origin.url 2025-08-14T21:23:43.8944974Z Entering 'third_party/cutlass' 2025-08-14T21:23:43.9011311Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cutlass/config remote.origin.url 2025-08-14T21:23:43.9034383Z Entering 'third_party/fbgemm' 2025-08-14T21:23:43.9088789Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config remote.origin.url 2025-08-14T21:23:43.9109382Z Entering 'third_party/fbgemm/external/asmjit' 2025-08-14T21:23:43.9171694Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/asmjit/config remote.origin.url 2025-08-14T21:23:43.9190875Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-08-14T21:23:43.9247359Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/composable_kernel/config remote.origin.url 2025-08-14T21:23:43.9272551Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-08-14T21:23:43.9330997Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cpuinfo/config remote.origin.url 2025-08-14T21:23:43.9350893Z Entering 'third_party/fbgemm/external/cutlass' 2025-08-14T21:23:43.9405532Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cutlass/config remote.origin.url 2025-08-14T21:23:43.9436438Z Entering 'third_party/fbgemm/external/googletest' 2025-08-14T21:23:43.9490453Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/googletest/config remote.origin.url 2025-08-14T21:23:43.9510594Z Entering 
'third_party/fbgemm/external/hipify_torch' 2025-08-14T21:23:43.9575486Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/hipify_torch/config remote.origin.url 2025-08-14T21:23:43.9595316Z Entering 'third_party/fbgemm/external/json' 2025-08-14T21:23:43.9657686Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/json/config remote.origin.url 2025-08-14T21:23:43.9683270Z Entering 'third_party/flash-attention' 2025-08-14T21:23:43.9744095Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/config remote.origin.url 2025-08-14T21:23:43.9769571Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T21:23:43.9830835Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/composable_kernel/config remote.origin.url 2025-08-14T21:23:43.9858058Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-08-14T21:23:43.9915158Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/cutlass/config remote.origin.url 2025-08-14T21:23:43.9941802Z Entering 'third_party/flatbuffers' 2025-08-14T21:23:44.0002433Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config remote.origin.url 2025-08-14T21:23:44.0018994Z Entering 'third_party/fmt' 2025-08-14T21:23:44.0078732Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config remote.origin.url 2025-08-14T21:23:44.0103219Z Entering 'third_party/gemmlowp/gemmlowp' 2025-08-14T21:23:44.0146050Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config remote.origin.url 2025-08-14T21:23:44.0165405Z Entering 'third_party/gloo' 2025-08-14T21:23:44.0219565Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config remote.origin.url 2025-08-14T21:23:44.0233529Z Entering 'third_party/googletest' 2025-08-14T21:23:44.0294461Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config remote.origin.url 2025-08-14T21:23:44.0318566Z Entering 'third_party/ideep' 2025-08-14T21:23:44.0378754Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config remote.origin.url 2025-08-14T21:23:44.0393190Z Entering 'third_party/ideep/mkl-dnn' 2025-08-14T21:23:44.0447440Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config remote.origin.url 2025-08-14T21:23:44.0477693Z Entering 'third_party/ittapi' 2025-08-14T21:23:44.0535721Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ittapi/config remote.origin.url 2025-08-14T21:23:44.0553456Z Entering 'third_party/kineto' 2025-08-14T21:23:44.0610301Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config remote.origin.url 2025-08-14T21:23:44.0624877Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T21:23:44.0685486Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/config remote.origin.url 2025-08-14T21:23:44.0712042Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-08-14T21:23:44.0756910Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/DCGM/config remote.origin.url 2025-08-14T21:23:44.0772466Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T21:23:44.0824631Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/cpr/config remote.origin.url 2025-08-14T21:23:44.0844297Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T21:23:44.0905325Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/fmt/config remote.origin.url 2025-08-14T21:23:44.0925498Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T21:23:44.0989053Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/config remote.origin.url 2025-08-14T21:23:44.1003107Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T21:23:44.1060978Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/modules/doc/config remote.origin.url 2025-08-14T21:23:44.1085543Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T21:23:44.1143557Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/glog/config remote.origin.url 2025-08-14T21:23:44.1159270Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T21:23:44.1217924Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/googletest/config remote.origin.url 2025-08-14T21:23:44.1236356Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T21:23:44.1293682Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/json/config remote.origin.url 2025-08-14T21:23:44.1304622Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T21:23:44.1355885Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/pfs/config remote.origin.url 2025-08-14T21:23:44.1377551Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T21:23:44.1435459Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/fmt/config remote.origin.url 2025-08-14T21:23:44.1452279Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T21:23:44.1505079Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/googletest/config remote.origin.url 2025-08-14T21:23:44.1526185Z Entering 'third_party/kleidiai' 2025-08-14T21:23:44.1583785Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kleidiai/config remote.origin.url 2025-08-14T21:23:44.1599023Z Entering 'third_party/mimalloc' 2025-08-14T21:23:44.1653419Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/mimalloc/config remote.origin.url 2025-08-14T21:23:44.1668013Z Entering 'third_party/nlohmann' 2025-08-14T21:23:44.1742610Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/nlohmann/config remote.origin.url 2025-08-14T21:23:44.1770724Z Entering 'third_party/onnx' 2025-08-14T21:23:44.1812941Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/config remote.origin.url 2025-08-14T21:23:44.1846567Z Entering 'third_party/onnx/third_party/pybind11' 2025-08-14T21:23:44.1905257Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/pybind11/config remote.origin.url 2025-08-14T21:23:44.1921335Z Entering 'third_party/opentelemetry-cpp' 2025-08-14T21:23:44.1987550Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/config remote.origin.url 2025-08-14T21:23:44.2012453Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T21:23:44.2073750Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/benchmark/config remote.origin.url 2025-08-14T21:23:44.2091497Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T21:23:44.2158754Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/googletest/config remote.origin.url 2025-08-14T21:23:44.2191374Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T21:23:44.2239417Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/ms-gsl/config remote.origin.url 2025-08-14T21:23:44.2254454Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T21:23:44.2309636Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/nlohmann-json/config remote.origin.url 2025-08-14T21:23:44.2331532Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T21:23:44.2393517Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentelemetry-proto/config remote.origin.url 2025-08-14T21:23:44.2407595Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T21:23:44.2464303Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentracing-cpp/config remote.origin.url 2025-08-14T21:23:44.2482436Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T21:23:44.2537799Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/config remote.origin.url 2025-08-14T21:23:44.2555342Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T21:23:44.2611295Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/civetweb/config remote.origin.url 2025-08-14T21:23:44.2641656Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T21:23:44.2697759Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/googletest/config remote.origin.url 2025-08-14T21:23:44.2713412Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T21:23:44.2771317Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/tools/vcpkg/config remote.origin.url 2025-08-14T21:23:44.2813674Z Entering 'third_party/pocketfft' 2025-08-14T21:23:44.2869719Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pocketfft/config remote.origin.url 2025-08-14T21:23:44.2887239Z Entering 'third_party/protobuf' 2025-08-14T21:23:44.2939645Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/config remote.origin.url 2025-08-14T21:23:44.2959573Z Entering 'third_party/protobuf/third_party/benchmark' 2025-08-14T21:23:44.3004371Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/benchmark/config remote.origin.url 2025-08-14T21:23:44.3018383Z Entering 'third_party/protobuf/third_party/googletest' 2025-08-14T21:23:44.3074378Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/googletest/config remote.origin.url 2025-08-14T21:23:44.3090443Z Entering 'third_party/psimd' 2025-08-14T21:23:44.3145502Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/psimd/config remote.origin.url 2025-08-14T21:23:44.3165137Z Entering 'third_party/pthreadpool' 2025-08-14T21:23:44.3230658Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/pthreadpool/config remote.origin.url 2025-08-14T21:23:44.3250152Z Entering 'third_party/pybind11' 2025-08-14T21:23:44.3306653Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pybind11/config remote.origin.url 2025-08-14T21:23:44.3321668Z Entering 'third_party/python-peachpy' 2025-08-14T21:23:44.3380515Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/python-peachpy/config remote.origin.url 2025-08-14T21:23:44.3394863Z Entering 'third_party/sleef' 2025-08-14T21:23:44.3455029Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/sleef/config remote.origin.url 2025-08-14T21:23:44.3477873Z Entering 'third_party/tensorpipe' 2025-08-14T21:23:44.3548333Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/config remote.origin.url 2025-08-14T21:23:44.3566306Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-08-14T21:23:44.3626772Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/googletest/config remote.origin.url 2025-08-14T21:23:44.3640767Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-08-14T21:23:44.3700095Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libnop/config remote.origin.url 2025-08-14T21:23:44.3713348Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-08-14T21:23:44.3773650Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libuv/config remote.origin.url 2025-08-14T21:23:44.3796906Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T21:23:44.3884087Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/config remote.origin.url 2025-08-14T21:23:44.3906588Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T21:23:44.3965601Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/modules/tools/clang/config remote.origin.url 2025-08-14T21:23:44.4646410Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'git@github.com:' 2025-08-14T21:23:44.5018813Z Entering 'android/libs/fbjni' 2025-08-14T21:23:44.5066311Z Entering 'third_party/FP16' 2025-08-14T21:23:44.5110005Z Entering 'third_party/FXdiv' 2025-08-14T21:23:44.5153062Z Entering 'third_party/NNPACK' 2025-08-14T21:23:44.5208401Z Entering 'third_party/NVTX' 2025-08-14T21:23:44.5258733Z Entering 'third_party/VulkanMemoryAllocator' 2025-08-14T21:23:44.5305959Z Entering 'third_party/XNNPACK' 2025-08-14T21:23:44.5367997Z Entering 'third_party/aiter' 2025-08-14T21:23:44.5417298Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T21:23:44.5474195Z Entering 'third_party/benchmark' 2025-08-14T21:23:44.5526685Z Entering 'third_party/composable_kernel' 2025-08-14T21:23:44.5577200Z Entering 'third_party/cpp-httplib' 2025-08-14T21:23:44.5623752Z Entering 'third_party/cpuinfo' 2025-08-14T21:23:44.5677291Z Entering 'third_party/cudnn_frontend' 2025-08-14T21:23:44.5725197Z Entering 'third_party/cutlass' 2025-08-14T21:23:44.5788790Z Entering 'third_party/fbgemm' 2025-08-14T21:23:44.5839596Z Entering 'third_party/fbgemm/external/asmjit' 2025-08-14T21:23:44.5890032Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-08-14T21:23:44.5939142Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-08-14T21:23:44.5987086Z Entering 'third_party/fbgemm/external/cutlass' 2025-08-14T21:23:44.6049293Z Entering 'third_party/fbgemm/external/googletest' 2025-08-14T21:23:44.6093791Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-08-14T21:23:44.6135002Z Entering 'third_party/fbgemm/external/json' 2025-08-14T21:23:44.6181107Z Entering 'third_party/flash-attention' 2025-08-14T21:23:44.6237091Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T21:23:44.6286691Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-08-14T21:23:44.6351695Z Entering 'third_party/flatbuffers' 2025-08-14T21:23:44.6414074Z Entering 'third_party/fmt' 2025-08-14T21:23:44.6469193Z Entering 'third_party/gemmlowp/gemmlowp' 2025-08-14T21:23:44.6516855Z Entering 'third_party/gloo' 2025-08-14T21:23:44.6571238Z Entering 'third_party/googletest' 2025-08-14T21:23:44.6632169Z Entering 'third_party/ideep' 2025-08-14T21:23:44.6677064Z Entering 'third_party/ideep/mkl-dnn' 2025-08-14T21:23:44.6732740Z Entering 'third_party/ittapi' 2025-08-14T21:23:44.6789471Z Entering 'third_party/kineto' 2025-08-14T21:23:44.6831924Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T21:23:44.6879249Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-08-14T21:23:44.6934042Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T21:23:44.6977062Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T21:23:44.7019817Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T21:23:44.7063263Z Entering 
'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T21:23:44.7110036Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T21:23:44.7162005Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T21:23:44.7208253Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T21:23:44.7256415Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T21:23:44.7298978Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T21:23:44.7355360Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T21:23:44.7402767Z Entering 'third_party/kleidiai' 2025-08-14T21:23:44.7451173Z Entering 'third_party/mimalloc' 2025-08-14T21:23:44.7498245Z Entering 'third_party/nlohmann' 2025-08-14T21:23:44.7542222Z Entering 'third_party/onnx' 2025-08-14T21:23:44.7605257Z Entering 'third_party/onnx/third_party/pybind11' 2025-08-14T21:23:44.7661904Z Entering 'third_party/opentelemetry-cpp' 2025-08-14T21:23:44.7715732Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T21:23:44.7759927Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T21:23:44.7805647Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T21:23:44.7861831Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T21:23:44.7904785Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T21:23:44.7951581Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T21:23:44.8005745Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T21:23:44.8047095Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T21:23:44.8091847Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T21:23:44.8148586Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T21:23:44.8219852Z Entering 'third_party/pocketfft' 2025-08-14T21:23:44.8275775Z Entering 'third_party/protobuf' 2025-08-14T21:23:44.8324470Z Entering 'third_party/protobuf/third_party/benchmark' 2025-08-14T21:23:44.8364700Z Entering 'third_party/protobuf/third_party/googletest' 2025-08-14T21:23:44.8408469Z Entering 'third_party/psimd' 2025-08-14T21:23:44.8456116Z Entering 'third_party/pthreadpool' 2025-08-14T21:23:44.8500793Z Entering 'third_party/pybind11' 2025-08-14T21:23:44.8544749Z Entering 'third_party/python-peachpy' 2025-08-14T21:23:44.8587644Z Entering 'third_party/sleef' 2025-08-14T21:23:44.8634282Z Entering 'third_party/tensorpipe' 2025-08-14T21:23:44.8683772Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-08-14T21:23:44.8720786Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-08-14T21:23:44.8763555Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-08-14T21:23:44.8816233Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T21:23:44.8854964Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T21:23:44.8924089Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'org-21003710@github.com:' 2025-08-14T21:23:44.9267473Z Entering 'android/libs/fbjni' 2025-08-14T21:23:44.9311400Z Entering 'third_party/FP16' 2025-08-14T21:23:44.9358781Z Entering 'third_party/FXdiv' 2025-08-14T21:23:44.9412768Z Entering 'third_party/NNPACK' 
2025-08-14T21:23:44.9457486Z Entering 'third_party/NVTX' 2025-08-14T21:23:44.9515017Z Entering 'third_party/VulkanMemoryAllocator' 2025-08-14T21:23:44.9559995Z Entering 'third_party/XNNPACK' 2025-08-14T21:23:44.9627149Z Entering 'third_party/aiter' 2025-08-14T21:23:44.9675376Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T21:23:44.9730854Z Entering 'third_party/benchmark' 2025-08-14T21:23:44.9774280Z Entering 'third_party/composable_kernel' 2025-08-14T21:23:44.9835418Z Entering 'third_party/cpp-httplib' 2025-08-14T21:23:44.9884577Z Entering 'third_party/cpuinfo' 2025-08-14T21:23:44.9932342Z Entering 'third_party/cudnn_frontend' 2025-08-14T21:23:44.9984076Z Entering 'third_party/cutlass' 2025-08-14T21:23:45.0038541Z Entering 'third_party/fbgemm' 2025-08-14T21:23:45.0090229Z Entering 'third_party/fbgemm/external/asmjit' 2025-08-14T21:23:45.0166449Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-08-14T21:23:45.0222523Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-08-14T21:23:45.0270168Z Entering 'third_party/fbgemm/external/cutlass' 2025-08-14T21:23:45.0328268Z Entering 'third_party/fbgemm/external/googletest' 2025-08-14T21:23:45.0371190Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-08-14T21:23:45.0424527Z Entering 'third_party/fbgemm/external/json' 2025-08-14T21:23:45.0472636Z Entering 'third_party/flash-attention' 2025-08-14T21:23:45.0525437Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T21:23:45.0578324Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-08-14T21:23:45.0635470Z Entering 'third_party/flatbuffers' 2025-08-14T21:23:45.0702631Z Entering 'third_party/fmt' 2025-08-14T21:23:45.0762255Z Entering 'third_party/gemmlowp/gemmlowp' 2025-08-14T21:23:45.0810197Z Entering 'third_party/gloo' 2025-08-14T21:23:45.0861968Z Entering 'third_party/googletest' 2025-08-14T21:23:45.0909262Z Entering 'third_party/ideep' 2025-08-14T21:23:45.0954376Z Entering 'third_party/ideep/mkl-dnn' 2025-08-14T21:23:45.1009606Z Entering 'third_party/ittapi' 2025-08-14T21:23:45.1065657Z Entering 'third_party/kineto' 2025-08-14T21:23:45.1107224Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T21:23:45.1163883Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-08-14T21:23:45.1211001Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T21:23:45.1256520Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T21:23:45.1311800Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T21:23:45.1351848Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T21:23:45.1400349Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T21:23:45.1445061Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T21:23:45.1487355Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T21:23:45.1529746Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T21:23:45.1573852Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T21:23:45.1620726Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T21:23:45.1664062Z Entering 'third_party/kleidiai' 2025-08-14T21:23:45.1719563Z Entering 'third_party/mimalloc' 2025-08-14T21:23:45.1772026Z Entering 'third_party/nlohmann' 
2025-08-14T21:23:45.1819962Z Entering 'third_party/onnx' 2025-08-14T21:23:45.1891788Z Entering 'third_party/onnx/third_party/pybind11' 2025-08-14T21:23:45.1933089Z Entering 'third_party/opentelemetry-cpp' 2025-08-14T21:23:45.1983958Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T21:23:45.2026150Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T21:23:45.2067484Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T21:23:45.2109997Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T21:23:45.2161696Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T21:23:45.2197945Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T21:23:45.2244314Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T21:23:45.2294138Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T21:23:45.2344898Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T21:23:45.2388609Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T21:23:45.2455509Z Entering 'third_party/pocketfft' 2025-08-14T21:23:45.2502595Z Entering 'third_party/protobuf' 2025-08-14T21:23:45.2550545Z Entering 'third_party/protobuf/third_party/benchmark' 2025-08-14T21:23:45.2591409Z Entering 'third_party/protobuf/third_party/googletest' 2025-08-14T21:23:45.2642146Z Entering 'third_party/psimd' 2025-08-14T21:23:45.2690577Z Entering 'third_party/pthreadpool' 2025-08-14T21:23:45.2742283Z Entering 'third_party/pybind11' 2025-08-14T21:23:45.2790669Z Entering 'third_party/python-peachpy' 2025-08-14T21:23:45.2837784Z Entering 'third_party/sleef' 2025-08-14T21:23:45.2895627Z Entering 'third_party/tensorpipe' 2025-08-14T21:23:45.2952628Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-08-14T21:23:45.2992499Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-08-14T21:23:45.3048214Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-08-14T21:23:45.3084487Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T21:23:45.3134454Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T21:23:45.3191588Z ##[endgroup] 2025-08-14T21:23:45.3229768Z [command]/usr/bin/git log -1 --format=%H 2025-08-14T21:23:45.3259030Z 1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:23:45.3448182Z Prepare all required actions 2025-08-14T21:23:45.3448652Z Getting action download info 2025-08-14T21:23:45.4735787Z ##[group]Run ./.github/actions/setup-linux 2025-08-14T21:23:45.4736050Z env: 2025-08-14T21:23:45.4736238Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:23:45.4736434Z ##[endgroup] 2025-08-14T21:23:45.4778875Z ##[group]Run set -euo pipefail 2025-08-14T21:23:45.4779169Z set -euo pipefail 2025-08-14T21:23:45.4779402Z function get_ec2_metadata() { 2025-08-14T21:23:45.4779694Z  # Pulled from instance metadata endpoint for EC2 2025-08-14T21:23:45.4780192Z  # see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html 2025-08-14T21:23:45.4780618Z  category=$1 2025-08-14T21:23:45.4780893Z  # If it is GCP runner (runner name contains gcp), do not run this 2025-08-14T21:23:45.4781220Z  runner_name_str=i-0019fc24284416ca3 2025-08-14T21:23:45.4781520Z  if [[ -f /.inarc ]]; then 2025-08-14T21:23:45.4781788Z  echo "ARC Runner, no info on ec2 metadata" 2025-08-14T21:23:45.4782087Z  elif [[ $runner_name_str == *"gcp"* ]]; then 
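# (Editorial aside, not part of the logged script: the branch below talks to the
#  EC2 instance-metadata service. A minimal standalone sketch of that IMDSv2
#  handshake, runnable on any EC2 host, looks like this.)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 30")                     # short-lived session token
curl -fsSL -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/instance-type"            # e.g. m4.10xlarge on this runner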
2025-08-14T21:23:45.4782444Z  echo "Runner is from Google Cloud Platform, No info on ec2 metadata" 2025-08-14T21:23:45.4782762Z  else 2025-08-14T21:23:45.4789809Z  curl -H "X-aws-ec2-metadata-token: $(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 30")" -fsSL "http://169.254.169.254/latest/meta-data/${category}" 2025-08-14T21:23:45.4790606Z  fi 2025-08-14T21:23:45.4790787Z } 2025-08-14T21:23:45.4790995Z echo "ami-id: $(get_ec2_metadata ami-id)" 2025-08-14T21:23:45.4791321Z echo "instance-id: $(get_ec2_metadata instance-id)" 2025-08-14T21:23:45.4791688Z echo "instance-type: $(get_ec2_metadata instance-type)" 2025-08-14T21:23:45.4792004Z echo "system info $(uname -a)" 2025-08-14T21:23:45.4800847Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:23:45.4801237Z env: 2025-08-14T21:23:45.4801445Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:23:45.4801647Z ##[endgroup] 2025-08-14T21:23:45.4990279Z ami-id: ami-05ffe3c48a9991133 2025-08-14T21:23:45.5130488Z instance-id: i-0019fc24284416ca3 2025-08-14T21:23:45.5253784Z instance-type: m4.10xlarge 2025-08-14T21:23:45.5273159Z system info Linux ip-10-0-56-34.ec2.internal 6.1.141-155.222.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Jun 17 10:29:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux 2025-08-14T21:23:45.5296579Z ##[group]Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-08-14T21:23:45.5297271Z echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-08-14T21:23:45.5302715Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:23:45.5303011Z env: 2025-08-14T21:23:45.5303193Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:23:45.5303403Z ##[endgroup] 2025-08-14T21:23:45.5355394Z ##[group]Run if systemctl is-active --quiet docker; then 2025-08-14T21:23:45.5355754Z if systemctl is-active --quiet docker; then 2025-08-14T21:23:45.5356053Z  echo "Docker daemon is running..."; 2025-08-14T21:23:45.5356295Z else 2025-08-14T21:23:45.5356577Z  echo "Starting docker daemon..." && sudo systemctl start docker; 2025-08-14T21:23:45.5356901Z fi 2025-08-14T21:23:45.5362015Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:23:45.5362326Z env: 2025-08-14T21:23:45.5362517Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:23:45.5362728Z ##[endgroup] 2025-08-14T21:23:45.5434193Z Docker daemon is running... 2025-08-14T21:23:45.5471548Z ##[group]Run nick-fields/retry@v3.0.0 2025-08-14T21:23:45.5471795Z with: 2025-08-14T21:23:45.5471971Z shell: bash 2025-08-14T21:23:45.5472300Z timeout_minutes: 5 2025-08-14T21:23:45.5472497Z max_attempts: 3 2025-08-14T21:23:45.5472694Z retry_wait_seconds: 30 2025-08-14T21:23:45.5474343Z command: AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\") aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \ --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com" # For LF Runners we need to make sure we also login to Meta's ECR docker registry too. 
META_AWS_ACCOUNT_ID=308535385114 if [ "$AWS_ACCOUNT_ID" != "$META_AWS_ACCOUNT_ID" ] ; then aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \ --password-stdin "$META_AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com" fi 2025-08-14T21:23:45.5475979Z polling_interval_seconds: 1 2025-08-14T21:23:45.5476200Z warning_on_retry: true 2025-08-14T21:23:45.5476406Z continue_on_error: false 2025-08-14T21:23:45.5476605Z env: 2025-08-14T21:23:45.5476774Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:23:45.5476985Z AWS_RETRY_MODE: standard 2025-08-14T21:23:45.5477187Z AWS_MAX_ATTEMPTS: 5 2025-08-14T21:23:45.5477390Z AWS_DEFAULT_REGION: us-east-1 2025-08-14T21:23:45.5477595Z ##[endgroup] 2025-08-14T21:23:46.7394663Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-08-14T21:23:46.7395290Z Configure a credential helper to remove this warning. See 2025-08-14T21:23:46.7395739Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-08-14T21:23:46.7396033Z 2025-08-14T21:23:46.7396121Z Login Succeeded 2025-08-14T21:23:47.6415155Z Command completed after 1 attempt(s). 2025-08-14T21:23:47.6479951Z ##[group]Run env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-08-14T21:23:47.6480377Z env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-08-14T21:23:47.6480731Z env | grep '^CI' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-08-14T21:23:47.6487102Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:23:47.6487395Z env: 2025-08-14T21:23:47.6487582Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:23:47.6487780Z ##[endgroup] 2025-08-14T21:23:47.6579195Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty 2025-08-14T21:23:47.6579633Z # ignore expansion of "docker ps -q" since it could be empty 2025-08-14T21:23:47.6579960Z # shellcheck disable=SC2046 2025-08-14T21:23:47.6580220Z docker stop $(docker ps -q) || true 2025-08-14T21:23:47.6580479Z # Prune all of the docker images 2025-08-14T21:23:47.6580747Z docker system prune -af 2025-08-14T21:23:47.6585618Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:23:47.6585897Z env: 2025-08-14T21:23:47.6586082Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:23:47.6586287Z ##[endgroup] 2025-08-14T21:23:47.7040084Z "docker stop" requires at least 1 argument. 2025-08-14T21:23:47.7040448Z See 'docker stop --help'. 2025-08-14T21:23:47.7040641Z 2025-08-14T21:23:47.7040852Z Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...] 2025-08-14T21:23:47.7049708Z 2025-08-14T21:23:47.7049832Z Stop one or more running containers 2025-08-14T21:23:47.7240179Z Total reclaimed space: 0B 2025-08-14T21:23:47.7273016Z ##[group]Run set +e 2025-08-14T21:23:47.7273266Z set +e 2025-08-14T21:23:47.7273464Z set -x 2025-08-14T21:23:47.7273641Z  2025-08-14T21:23:47.7273835Z PT_DOMAIN=download.pytorch.org 2025-08-14T21:23:47.7274289Z # TODO: Flaky access to download.pytorch.org https://github.com/pytorch/pytorch/issues/100400, 2025-08-14T21:23:47.7274872Z # cleaning this up once the issue is fixed. There are more than one resolved IP here, the last 2025-08-14T21:23:47.7275268Z # one is returned at random 2025-08-14T21:23:47.7275567Z RESOLVED_IP=$(dig -4 +short "${PT_DOMAIN}" | tail -n1) 2025-08-14T21:23:47.7275855Z  2025-08-14T21:23:47.7276183Z if [ -z "${RESOLVED_IP}" ]; then 2025-08-14T21:23:47.7276511Z  echo "Couldn't resolve ${PT_DOMAIN}, retrying with Google DNS..." 
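# (Editorial aside, not part of the logged script: a condensed sketch of what the
#  step shown here does - resolve download.pytorch.org once, fall back to Google
#  DNS if the local resolver returns nothing, and pin the answer in /etc/hosts so
#  every later download in the job hits the same address.)
PT_DOMAIN=download.pytorch.org
ip=$(dig -4 +short "$PT_DOMAIN" | tail -n1)                          # last of the returned A records
[ -n "$ip" ] || ip=$(dig -4 +short "$PT_DOMAIN" @8.8.8.8 | tail -n1) # Google DNS fallback
sudo sed -i "/$PT_DOMAIN/d" /etc/hosts                               # drop any stale pin first
echo "$ip $PT_DOMAIN" | sudo tee -a /etc/hosts                       # 18.160.10.22 on this run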
2025-08-14T21:23:47.7276908Z  RESOLVED_IP=$(dig -4 +short "${PT_DOMAIN}" @8.8.8.8 | tail -n1) 2025-08-14T21:23:47.7277211Z  2025-08-14T21:23:47.7277405Z  if [ -z "${RESOLVED_IP}" ]; then 2025-08-14T21:23:47.7277692Z  echo "Couldn't resolve ${PT_DOMAIN}, exiting..." 2025-08-14T21:23:47.7277975Z  exit 1 2025-08-14T21:23:47.7278163Z  fi 2025-08-14T21:23:47.7278329Z fi 2025-08-14T21:23:47.7278497Z  2025-08-14T21:23:47.7278703Z if grep -r "${PT_DOMAIN}" /etc/hosts; then 2025-08-14T21:23:47.7278976Z  # Clean up any old records first 2025-08-14T21:23:47.7279249Z  sudo sed -i "/${PT_DOMAIN}/d" /etc/hosts 2025-08-14T21:23:47.7279491Z fi 2025-08-14T21:23:47.7279656Z  2025-08-14T21:23:47.7279892Z echo "${RESOLVED_IP} ${PT_DOMAIN}" | sudo tee -a /etc/hosts 2025-08-14T21:23:47.7280187Z cat /etc/hosts 2025-08-14T21:23:47.7286035Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:23:47.7286312Z env: 2025-08-14T21:23:47.7286494Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:23:47.7286702Z ##[endgroup] 2025-08-14T21:23:47.7322615Z + PT_DOMAIN=download.pytorch.org 2025-08-14T21:23:47.7323759Z ++ dig -4 +short download.pytorch.org 2025-08-14T21:23:47.7324557Z ++ tail -n1 2025-08-14T21:23:47.7716347Z + RESOLVED_IP=18.160.10.22 2025-08-14T21:23:47.7716589Z + '[' -z 18.160.10.22 ']' 2025-08-14T21:23:47.7716835Z + grep -r download.pytorch.org /etc/hosts 2025-08-14T21:23:47.7738412Z + sudo tee -a /etc/hosts 2025-08-14T21:23:47.7745808Z + echo '18.160.10.22 download.pytorch.org' 2025-08-14T21:23:48.1449245Z 18.160.10.22 download.pytorch.org 2025-08-14T21:23:48.1470571Z + cat /etc/hosts 2025-08-14T21:23:48.1483268Z 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 2025-08-14T21:23:48.1488541Z ::1 localhost6 localhost6.localdomain6 2025-08-14T21:23:48.1488867Z 18.160.10.22 download.pytorch.org 2025-08-14T21:23:48.1656479Z ##[group]Run pytorch/test-infra/.github/actions/calculate-docker-image@main 2025-08-14T21:23:48.1656850Z with: 2025-08-14T21:23:48.1657486Z docker-image-name: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:48.1658175Z use-custom-docker-registry: true 2025-08-14T21:23:48.1658433Z docker-build-dir: .ci/docker 2025-08-14T21:23:48.1658673Z docker-build-script: ./build.sh 2025-08-14T21:23:48.1658920Z working-directory: . 2025-08-14T21:23:48.1659187Z docker-registry: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:23:48.1659491Z force-push: false 2025-08-14T21:23:48.1659679Z env: 2025-08-14T21:23:48.1659847Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:23:48.1660050Z ##[endgroup] 2025-08-14T21:23:48.1680006Z ##[group]Run set -ex 2025-08-14T21:23:48.1680257Z set -ex 2025-08-14T21:23:48.1680507Z  2025-08-14T21:23:48.1680947Z # If the docker build directory or the build script doesn't exist, the action will 2025-08-14T21:23:48.1681510Z # gracefully return the docker image name as it is. 
Pulling docker image in Linux 2025-08-14T21:23:48.1681934Z # job could then download the pre-built image as usual 2025-08-14T21:23:48.1682455Z if [[ -d "${DOCKER_BUILD_DIR}" ]] && [[ -f "${DOCKER_BUILD_DIR}/${DOCKER_BUILD_SCRIPT}" ]] && [[ "${USE_CUSTOM_DOCKER_REGISTRY}" == "true" ]]; then 2025-08-14T21:23:48.1682946Z  echo "skip=false" >> "${GITHUB_OUTPUT}" 2025-08-14T21:23:48.1683196Z else 2025-08-14T21:23:48.1683413Z  echo "skip=true" >> "${GITHUB_OUTPUT}" 2025-08-14T21:23:48.1683756Z  echo "docker-image=${DOCKER_IMAGE_NAME}" >> "${GITHUB_OUTPUT}" 2025-08-14T21:23:48.1684058Z  2025-08-14T21:23:48.1684468Z  echo "Not using custom ECR registry. Either it was not requested or there is no Docker build script in the ${REPO_NAME} repo..." 2025-08-14T21:23:48.1684999Z  exit 0 2025-08-14T21:23:48.1685179Z fi 2025-08-14T21:23:48.1685341Z  2025-08-14T21:23:48.1685609Z if [[ "${DOCKER_IMAGE_NAME}" == *"${DOCKER_REGISTRY}/${REPO_NAME}"* ]]; then 2025-08-14T21:23:48.1686057Z  # The docker image name already includes the ECR prefix and tag, so we can just 2025-08-14T21:23:48.1686454Z  # use it as it is, but first let's extract the tag 2025-08-14T21:23:48.1686812Z  DOCKER_TAG=$(echo "${DOCKER_IMAGE_NAME}" | awk -F '[:,]' '{print $2}') 2025-08-14T21:23:48.1687196Z  echo "docker-tag=${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-08-14T21:23:48.1687559Z  echo "docker-image=${DOCKER_IMAGE_NAME}" >> "${GITHUB_OUTPUT}" 2025-08-14T21:23:48.1687871Z else 2025-08-14T21:23:48.1688076Z  if [[ "${DOCKER_IMAGE_NAME}" == *:* ]]; then 2025-08-14T21:23:48.1688366Z  CUSTOM_TAG_PREFIX=${DOCKER_IMAGE_NAME#*:} 2025-08-14T21:23:48.1688669Z  DOCKER_IMAGE_NAME=${DOCKER_IMAGE_NAME%%:*} 2025-08-14T21:23:48.1688917Z  fi 2025-08-14T21:23:48.1689260Z  DOCKER_TAG=${CUSTOM_TAG_PREFIX:+${CUSTOM_TAG_PREFIX}-}$(git rev-parse HEAD:"${DOCKER_BUILD_DIR}") 2025-08-14T21:23:48.1689712Z  echo "docker-tag=${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-08-14T21:23:48.1690177Z  echo "docker-image=${DOCKER_REGISTRY}/${REPO_NAME}/${DOCKER_IMAGE_NAME}:${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-08-14T21:23:48.1690813Z  echo "custom-tag-prefix=${CUSTOM_TAG_PREFIX}" >> "${GITHUB_OUTPUT}" 2025-08-14T21:23:48.1691128Z fi 2025-08-14T21:23:48.1706065Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:23:48.1706348Z env: 2025-08-14T21:23:48.1706533Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:23:48.1706744Z REPO_NAME: pytorch 2025-08-14T21:23:48.1707511Z DOCKER_IMAGE_NAME: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:48.1708187Z DOCKER_BUILD_DIR: .ci/docker 2025-08-14T21:23:48.1708420Z DOCKER_BUILD_SCRIPT: ./build.sh 2025-08-14T21:23:48.1708726Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:23:48.1709031Z USE_CUSTOM_DOCKER_REGISTRY: true 2025-08-14T21:23:48.1709265Z CUSTOM_TAG_PREFIX: 2025-08-14T21:23:48.1709515Z ##[endgroup] 2025-08-14T21:23:48.1735667Z + [[ -d .ci/docker ]] 2025-08-14T21:23:48.1735928Z + [[ -f .ci/docker/./build.sh ]] 2025-08-14T21:23:48.1736179Z + [[ true == \t\r\u\e ]] 2025-08-14T21:23:48.1736395Z + echo skip=false 2025-08-14T21:23:48.1737564Z + [[ 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe == *\3\0\8\5\3\5\3\8\5\1\1\4\.\d\k\r\.\e\c\r\.\u\s\-\e\a\s\t\-\1\.\a\m\a\z\o\n\a\w\s\.\c\o\m\/\p\y\t\o\r\c\h* ]] 2025-08-14T21:23:48.1746415Z ++ echo 
308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:48.1748629Z ++ awk -F '[:,]' '{print $2}' 2025-08-14T21:23:48.1834419Z + DOCKER_TAG=pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:48.1835591Z + echo docker-tag=pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:48.1836534Z + echo docker-image=308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:48.1856425Z ##[group]Run set +e 2025-08-14T21:23:48.1856675Z set +e 2025-08-14T21:23:48.1856866Z set -x 2025-08-14T21:23:48.1857048Z  2025-08-14T21:23:48.1857211Z login() { 2025-08-14T21:23:48.1857579Z  aws ecr get-login-password --region us-east-1 | docker login -u AWS --password-stdin "$1" 2025-08-14T21:23:48.1857978Z } 2025-08-14T21:23:48.1858141Z  2025-08-14T21:23:48.1858312Z retry () { 2025-08-14T21:23:48.1858532Z  $* || (sleep 1 && $*) || (sleep 2 && $*) 2025-08-14T21:23:48.1858774Z } 2025-08-14T21:23:48.1858943Z  2025-08-14T21:23:48.1859127Z retry login "${DOCKER_REGISTRY}" 2025-08-14T21:23:48.1859362Z  2025-08-14T21:23:48.1859529Z START_TIME=$(date +%s) 2025-08-14T21:23:48.1859760Z # Wait up to 120 minutes 2025-08-14T21:23:48.1860055Z while [[ $(( $(date +%s) - 7200 )) -lt $START_TIME ]]; do 2025-08-14T21:23:48.1860422Z  # Check if image already exists, if it does then skip building it 2025-08-14T21:23:48.1860796Z  if docker manifest inspect "${DOCKER_IMAGE}"; then 2025-08-14T21:23:48.1861074Z  exit 0 2025-08-14T21:23:48.1861253Z  fi 2025-08-14T21:23:48.1861427Z  2025-08-14T21:23:48.1861721Z  # NB: This flag is used by Docker build workflow to push the image to ECR, so we can 2025-08-14T21:23:48.1862209Z  # use this to differentiate between the Docker build and regular build jobs. For the 2025-08-14T21:23:48.1862684Z  # latter, it will wait for the Docker images to become available before continuing 2025-08-14T21:23:48.1863073Z  if [ "${DOCKER_PUSH:-false}" == "true" ]; then 2025-08-14T21:23:48.1863377Z  # It's a Docker build job, let's build the image 2025-08-14T21:23:48.1863749Z  break 2025-08-14T21:23:48.1863927Z  else 2025-08-14T21:23:48.1864195Z  # It's a regular build job, wait for the image to become available 2025-08-14T21:23:48.1864504Z  sleep 300 2025-08-14T21:23:48.1864697Z  fi 2025-08-14T21:23:48.1864874Z done 2025-08-14T21:23:48.1865046Z  2025-08-14T21:23:48.1865310Z # NB: This part requires a full checkout. Otherwise, the merge base will 2025-08-14T21:23:48.1865877Z # be empty. 
The default action would be to continue rebuild the image 2025-08-14T21:23:48.1866260Z if [[ "$BASE_REVISION" = "$(git rev-parse HEAD)" ]]; then 2025-08-14T21:23:48.1866604Z  # if we're on the base branch then use the parent commit 2025-08-14T21:23:48.1866908Z  MERGE_BASE=$(git rev-parse HEAD~) 2025-08-14T21:23:48.1867147Z else 2025-08-14T21:23:48.1867399Z  # otherwise we're on a PR, so use the most recent base commit 2025-08-14T21:23:48.1867756Z  MERGE_BASE=$(git merge-base HEAD "$BASE_REVISION") 2025-08-14T21:23:48.1868031Z fi 2025-08-14T21:23:48.1868206Z  2025-08-14T21:23:48.1868396Z if [[ -z "${MERGE_BASE}" ]]; then 2025-08-14T21:23:48.1868664Z  echo "rebuild=true" >> "${GITHUB_OUTPUT}" 2025-08-14T21:23:48.1868968Z  2025-08-14T21:23:48.1875428Z  echo "Finding merge base only works with full checkout, please set fetch-depth to 0, continuing ..." 2025-08-14T21:23:48.1875827Z  exit 0 2025-08-14T21:23:48.1876009Z fi 2025-08-14T21:23:48.1876178Z  2025-08-14T21:23:48.1876413Z if ! git rev-parse "${MERGE_BASE}:${DOCKER_BUILD_DIR}"; then 2025-08-14T21:23:48.1876928Z  echo "Directory '${DOCKER_BUILD_DIR}' not found in commit $MERGE_BASE, you should rebase onto a more recent commit" 2025-08-14T21:23:48.1877393Z  exit 1 2025-08-14T21:23:48.1877573Z fi 2025-08-14T21:23:48.1877737Z  2025-08-14T21:23:48.1878015Z PREVIOUS_DOCKER_TAG=$(git rev-parse "${MERGE_BASE}:${DOCKER_BUILD_DIR}") 2025-08-14T21:23:48.1878503Z # If no image exists but the hash is the same as the previous hash then we should error out here 2025-08-14T21:23:48.1878940Z if [[ "${PREVIOUS_DOCKER_TAG}" == "${DOCKER_TAG}" ]]; then 2025-08-14T21:23:48.1879429Z  echo "WARNING: Something has gone wrong and the previous image isn't available for the merge-base of your branch" 2025-08-14T21:23:48.1879990Z  echo " Will re-build docker image to store in local cache, TTS may be longer" 2025-08-14T21:23:48.1880328Z fi 2025-08-14T21:23:48.1880488Z  2025-08-14T21:23:48.1880698Z echo "rebuild=true" >> "${GITHUB_OUTPUT}" 2025-08-14T21:23:48.1886364Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:23:48.1886663Z env: 2025-08-14T21:23:48.1886851Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:23:48.1887075Z DOCKER_BUILD_DIR: .ci/docker 2025-08-14T21:23:48.1887349Z BASE_REVISION: 1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:23:48.1888057Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:48.1888955Z DOCKER_TAG: pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:48.1889507Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:23:48.1889808Z DOCKER_PUSH: 2025-08-14T21:23:48.1889984Z ##[endgroup] 2025-08-14T21:23:48.1920411Z + retry login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:23:48.1920831Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:23:48.1923102Z + aws ecr get-login-password --region us-east-1 2025-08-14T21:23:48.1923969Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:23:48.7849150Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-08-14T21:23:48.7849742Z Configure a credential helper to remove this warning. 
See 2025-08-14T21:23:48.7850192Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-08-14T21:23:48.7850485Z 2025-08-14T21:23:48.7850580Z Login Succeeded 2025-08-14T21:23:48.7865543Z ++ date +%s 2025-08-14T21:23:48.7876579Z + START_TIME=1755206628 2025-08-14T21:23:48.7878227Z ++ date +%s 2025-08-14T21:23:48.7886815Z + [[ 1755199428 -lt 1755206628 ]] 2025-08-14T21:23:48.7895637Z + docker manifest inspect 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:49.0540171Z { 2025-08-14T21:23:49.0544666Z "schemaVersion": 2, 2025-08-14T21:23:49.0545018Z "mediaType": "application/vnd.docker.distribution.manifest.v2+json", 2025-08-14T21:23:49.0545361Z "config": { 2025-08-14T21:23:49.0545657Z "mediaType": "application/vnd.docker.container.image.v1+json", 2025-08-14T21:23:49.0545965Z "size": 30151, 2025-08-14T21:23:49.0546285Z "digest": "sha256:0899ae453036ee7a91795ea95b1db61000579eeb74b140edab5976919ee64bbe" 2025-08-14T21:23:49.0546638Z }, 2025-08-14T21:23:49.0546803Z "layers": [ 2025-08-14T21:23:49.0546964Z { 2025-08-14T21:23:49.0547230Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0547551Z "size": 30448173, 2025-08-14T21:23:49.0547882Z "digest": "sha256:660ffc76f83b006444a5731b215acc2e35138d8be5cac8ed1ffd40f947117495" 2025-08-14T21:23:49.0548236Z }, 2025-08-14T21:23:49.0548389Z { 2025-08-14T21:23:49.0548638Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0549257Z "size": 1554, 2025-08-14T21:23:49.0549579Z "digest": "sha256:c7b4a852a45516e27a9256df90878663d770f96d271d6155d43be78cc5225eef" 2025-08-14T21:23:49.0549925Z }, 2025-08-14T21:23:49.0550074Z { 2025-08-14T21:23:49.0550337Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0550647Z "size": 313280151, 2025-08-14T21:23:49.0550958Z "digest": "sha256:e5a28988c8932eb5797557621582a064ce48651dbb5eaed379e9978535daccb9" 2025-08-14T21:23:49.0551294Z }, 2025-08-14T21:23:49.0551447Z { 2025-08-14T21:23:49.0551691Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0552000Z "size": 793, 2025-08-14T21:23:49.0552321Z "digest": "sha256:76a69b57b6837bef07dbc1b481cf28a62dfd7c7063219d9f6e0d0d63067653c7" 2025-08-14T21:23:49.0552660Z }, 2025-08-14T21:23:49.0552808Z { 2025-08-14T21:23:49.0553057Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0553364Z "size": 106, 2025-08-14T21:23:49.0553756Z "digest": "sha256:5c785dcb4cdbf1f2ceffe4d1d8e85d73225a56d0236e7ed6e36a95c836996052" 2025-08-14T21:23:49.0554108Z }, 2025-08-14T21:23:49.0554259Z { 2025-08-14T21:23:49.0554553Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0554940Z "size": 704, 2025-08-14T21:23:49.0555252Z "digest": "sha256:836ab08052e8eb2bae68e69ae086fd23a5f04a8491c320718ab47f84f03aebb1" 2025-08-14T21:23:49.0555588Z }, 2025-08-14T21:23:49.0555741Z { 2025-08-14T21:23:49.0555992Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0556289Z "size": 1217, 2025-08-14T21:23:49.0556612Z "digest": "sha256:53b11c77468cbefca210560f7d8be8e58f9eeb415e096ab0c3fb0277f0b41caf" 2025-08-14T21:23:49.0556964Z }, 2025-08-14T21:23:49.0557112Z { 2025-08-14T21:23:49.0557359Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0557664Z "size": 485, 2025-08-14T21:23:49.0557968Z "digest": 
"sha256:e97311a6a967664cbe10c5027a1ec60c514caa9a1160167d8363088fd1f9fe09" 2025-08-14T21:23:49.0558305Z }, 2025-08-14T21:23:49.0558458Z { 2025-08-14T21:23:49.0558707Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0559290Z "size": 110343699, 2025-08-14T21:23:49.0559613Z "digest": "sha256:2c414689d31dc46a22fe02d4f43699f528cc1c02fb505824768383fa0bbf1c74" 2025-08-14T21:23:49.0559957Z }, 2025-08-14T21:23:49.0560107Z { 2025-08-14T21:23:49.0560362Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0560677Z "size": 4817, 2025-08-14T21:23:49.0560995Z "digest": "sha256:6d89b5f065d59e4abcaa9b5ff3bf0afded2394d493d2df0f7babf7154f7548e0" 2025-08-14T21:23:49.0561570Z }, 2025-08-14T21:23:49.0561734Z { 2025-08-14T21:23:49.0561995Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0562297Z "size": 1709, 2025-08-14T21:23:49.0562628Z "digest": "sha256:5a5cc76ada432cccf7d18e0eb79379afb95deaaa7afec482406267924d291ae4" 2025-08-14T21:23:49.0563191Z }, 2025-08-14T21:23:49.0563400Z { 2025-08-14T21:23:49.0563802Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0564173Z "size": 724, 2025-08-14T21:23:49.0564494Z "digest": "sha256:fc6b37d40530f2c5339430321eab67ae1e2e87e997587c7bc8c41504464208f9" 2025-08-14T21:23:49.0564825Z }, 2025-08-14T21:23:49.0564977Z { 2025-08-14T21:23:49.0565229Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0565528Z "size": 542, 2025-08-14T21:23:49.0565827Z "digest": "sha256:2e16579078600b91216fd14aca1e0ce0f9d1801b230689dd309980e8d2783935" 2025-08-14T21:23:49.0566160Z }, 2025-08-14T21:23:49.0566305Z { 2025-08-14T21:23:49.0566558Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0566867Z "size": 3397512507, 2025-08-14T21:23:49.0567181Z "digest": "sha256:7b92d7a4b8c766d7b7873aa33088e171fb44a8e968645e4b31dfe6de2968aead" 2025-08-14T21:23:49.0567519Z }, 2025-08-14T21:23:49.0567668Z { 2025-08-14T21:23:49.0567912Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0568206Z "size": 32, 2025-08-14T21:23:49.0568517Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:23:49.0568894Z }, 2025-08-14T21:23:49.0569076Z { 2025-08-14T21:23:49.0573547Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0573858Z "size": 380, 2025-08-14T21:23:49.0574156Z "digest": "sha256:d6226eb61f823984003d5ac28f4d66fec9b27baf5d54a9513286483f5912cd88" 2025-08-14T21:23:49.0574497Z }, 2025-08-14T21:23:49.0574651Z { 2025-08-14T21:23:49.0574896Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0575206Z "size": 234681, 2025-08-14T21:23:49.0575521Z "digest": "sha256:83c70f4266a6ee5f8f44a88d4cb951382f6c960323b8250046bddc080e62268b" 2025-08-14T21:23:49.0575869Z }, 2025-08-14T21:23:49.0576018Z { 2025-08-14T21:23:49.0576266Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0576571Z "size": 231, 2025-08-14T21:23:49.0576865Z "digest": "sha256:60c725d21861c24c417efe3a5474414ba04f0f49c78c6d6451478ab9e45469ec" 2025-08-14T21:23:49.0577215Z }, 2025-08-14T21:23:49.0577367Z { 2025-08-14T21:23:49.0577605Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0577916Z "size": 4464546, 2025-08-14T21:23:49.0578234Z "digest": "sha256:a504e76e66a49926b4ea837b7a7ff3c842a27b2caaa4d80cf5057a1e55293666" 
2025-08-14T21:23:49.0578574Z }, 2025-08-14T21:23:49.0578732Z { 2025-08-14T21:23:49.0578981Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0579286Z "size": 1864, 2025-08-14T21:23:49.0579609Z "digest": "sha256:fc1c200a4f77face2af0146f9b03ad04f31fe06fec216473ffd2ebd538cde056" 2025-08-14T21:23:49.0579961Z }, 2025-08-14T21:23:49.0580111Z { 2025-08-14T21:23:49.0580350Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0580653Z "size": 475, 2025-08-14T21:23:49.0580955Z "digest": "sha256:43273c22704f81f162741d2039015f745273eee1d1fdec47be35c9b2a90dcc5b" 2025-08-14T21:23:49.0583139Z }, 2025-08-14T21:23:49.0583297Z { 2025-08-14T21:23:49.0583620Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0584016Z "size": 178, 2025-08-14T21:23:49.0584331Z "digest": "sha256:89df389d042adbd7621a94d36b6e3db60ff6c559efb95c6fcc11b8afd42f0599" 2025-08-14T21:23:49.0584681Z }, 2025-08-14T21:23:49.0584826Z { 2025-08-14T21:23:49.0585074Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0585444Z "size": 586, 2025-08-14T21:23:49.0585735Z "digest": "sha256:684349f50d9456597026ee5c1bd890c51d1e498614f367adf03329c5227add79" 2025-08-14T21:23:49.0586068Z }, 2025-08-14T21:23:49.0586218Z { 2025-08-14T21:23:49.0586463Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0586766Z "size": 218, 2025-08-14T21:23:49.0587077Z "digest": "sha256:21d0eae87fb3ac753b3f0e91ae638360d23922d4cd119410a5a1b97bbe0ca435" 2025-08-14T21:23:49.0587423Z }, 2025-08-14T21:23:49.0587575Z { 2025-08-14T21:23:49.0587819Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0588122Z "size": 802, 2025-08-14T21:23:49.0588423Z "digest": "sha256:c9c2b424b8e08d943dc259a3796d66eede3a1e93a6460df5db132c0036d3d6af" 2025-08-14T21:23:49.0588769Z }, 2025-08-14T21:23:49.0588918Z { 2025-08-14T21:23:49.0589155Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0589459Z "size": 32, 2025-08-14T21:23:49.0589771Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:23:49.0590136Z }, 2025-08-14T21:23:49.0590288Z { 2025-08-14T21:23:49.0590527Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0590832Z "size": 104, 2025-08-14T21:23:49.0591139Z "digest": "sha256:98dda28f339592e3ca6d589d551e69b8314f2b7fc2a1544eacc1b3c2d3378521" 2025-08-14T21:23:49.0591472Z }, 2025-08-14T21:23:49.0591623Z { 2025-08-14T21:23:49.0591869Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0592170Z "size": 1496, 2025-08-14T21:23:49.0592484Z "digest": "sha256:acf5babd87f23aa905883eb434073e9a00ff41679134f2f4827dd86949f5a9d9" 2025-08-14T21:23:49.0592833Z }, 2025-08-14T21:23:49.0592977Z { 2025-08-14T21:23:49.0593223Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0593529Z "size": 453555614, 2025-08-14T21:23:49.0593857Z "digest": "sha256:7c5050d8408d3c4f9f5e8f2cb215245473bfc2f1510fe5ee01c2a6c505068b5a" 2025-08-14T21:23:49.0594197Z }, 2025-08-14T21:23:49.0594344Z { 2025-08-14T21:23:49.0594587Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0594885Z "size": 163, 2025-08-14T21:23:49.0595195Z "digest": "sha256:7ddd14e2b548b9ae6e216a081bb20116434aacbbe571c99b40e60fb2fde22a2a" 2025-08-14T21:23:49.0595542Z }, 2025-08-14T21:23:49.0595686Z { 2025-08-14T21:23:49.0595929Z "mediaType": 
"application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0596239Z "size": 347, 2025-08-14T21:23:49.0596541Z "digest": "sha256:4ba8e7a736c8199931fd7ff9931a5f17b7b931d0383a3e158f1b12b191a1d250" 2025-08-14T21:23:49.0596884Z }, 2025-08-14T21:23:49.0597035Z { 2025-08-14T21:23:49.0597273Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0597582Z "size": 32, 2025-08-14T21:23:49.0597933Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:23:49.0602539Z }, 2025-08-14T21:23:49.0602692Z { 2025-08-14T21:23:49.0602942Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0603253Z "size": 106, 2025-08-14T21:23:49.0603559Z "digest": "sha256:907c320fee2f90da0cf5028c90a0ef49a137518baf79b483dcf7f22d5a0a497d" 2025-08-14T21:23:49.0603912Z }, 2025-08-14T21:23:49.0604070Z { 2025-08-14T21:23:49.0604310Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0604689Z "size": 425, 2025-08-14T21:23:49.0605008Z "digest": "sha256:18c4ed1ec491095788e352ae018afd84de0f251fbcfb8f74d5d893e1e9ab196d" 2025-08-14T21:23:49.0605350Z }, 2025-08-14T21:23:49.0605505Z { 2025-08-14T21:23:49.0605760Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0606063Z "size": 19308711, 2025-08-14T21:23:49.0606391Z "digest": "sha256:d7618c2df6cdb4bbf3d9870ba2d089094ac46c429b573d9adb94411fac54cfca" 2025-08-14T21:23:49.0606743Z }, 2025-08-14T21:23:49.0606955Z { 2025-08-14T21:23:49.0607196Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0607504Z "size": 108, 2025-08-14T21:23:49.0607810Z "digest": "sha256:b7bdd9a6f789ba483a46c92e5d373638850f33e88b1baa4bbe67e1c6a09cb7d0" 2025-08-14T21:23:49.0608149Z }, 2025-08-14T21:23:49.0608304Z { 2025-08-14T21:23:49.0608548Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0608848Z "size": 691, 2025-08-14T21:23:49.0609168Z "digest": "sha256:6738ba83282e002d92bff3d2b4951e3c1a67f5ec2c1bad2fd780c2f5d444748f" 2025-08-14T21:23:49.0609513Z }, 2025-08-14T21:23:49.0609654Z { 2025-08-14T21:23:49.0609898Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0610198Z "size": 724, 2025-08-14T21:23:49.0610499Z "digest": "sha256:fc6b37d40530f2c5339430321eab67ae1e2e87e997587c7bc8c41504464208f9" 2025-08-14T21:23:49.0610832Z }, 2025-08-14T21:23:49.0610983Z { 2025-08-14T21:23:49.0611230Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0611525Z "size": 116, 2025-08-14T21:23:49.0611826Z "digest": "sha256:dfb0f24886393e1d394f1f433dc9346026679dafd7a60c3a93de17d94078c1ca" 2025-08-14T21:23:49.0612162Z }, 2025-08-14T21:23:49.0612303Z { 2025-08-14T21:23:49.0612603Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0612990Z "size": 136, 2025-08-14T21:23:49.0613290Z "digest": "sha256:dc833b0762f2e144670a660f6b7ce62cec71a5fdd24df4e67b5c6173d5834451" 2025-08-14T21:23:49.0613639Z }, 2025-08-14T21:23:49.0613794Z { 2025-08-14T21:23:49.0614034Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0614342Z "size": 139, 2025-08-14T21:23:49.0614653Z "digest": "sha256:8827df8ca2da347e0032d1bff3b0312437f711c5d0b5f2164f8a60c3368a9827" 2025-08-14T21:23:49.0614997Z }, 2025-08-14T21:23:49.0615144Z { 2025-08-14T21:23:49.0615397Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0615707Z "size": 17672683360, 
2025-08-14T21:23:49.0616032Z "digest": "sha256:fac8f3bd0f85eaffb43df539683dc3d861c370e583623253559fd7a1f5b00229" 2025-08-14T21:23:49.0616383Z }, 2025-08-14T21:23:49.0616536Z { 2025-08-14T21:23:49.0616777Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0617083Z "size": 214, 2025-08-14T21:23:49.0617393Z "digest": "sha256:d7cf7f140df32761610e1d58686db7f7c66a85affa4bb4b9d3c245e232443a8f" 2025-08-14T21:23:49.0617734Z }, 2025-08-14T21:23:49.0617895Z { 2025-08-14T21:23:49.0618142Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0618447Z "size": 272992162, 2025-08-14T21:23:49.0618774Z "digest": "sha256:733eedc8da8d8e7bd5a85a58d3d7818f14ed9a4fdf2dbd587038bb7725fbb9f7" 2025-08-14T21:23:49.0619124Z }, 2025-08-14T21:23:49.0619277Z { 2025-08-14T21:23:49.0619519Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0619832Z "size": 6435582332, 2025-08-14T21:23:49.0620158Z "digest": "sha256:5b092eb06909a2ea8906849acac588a10864da349670d65c0bfea342187edba2" 2025-08-14T21:23:49.0620495Z }, 2025-08-14T21:23:49.0620649Z { 2025-08-14T21:23:49.0620896Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0621196Z "size": 129, 2025-08-14T21:23:49.0621495Z "digest": "sha256:bc596103109216e154006085503386753b0b114b5900bf44758cdff324df5504" 2025-08-14T21:23:49.0621911Z }, 2025-08-14T21:23:49.0622060Z { 2025-08-14T21:23:49.0622310Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0622619Z "size": 776, 2025-08-14T21:23:49.0622935Z "digest": "sha256:0531cc34c12ab9127f1858c4cf365bb3a02bc31e8d6df5eabba2e1b6ef026ccf" 2025-08-14T21:23:49.0623275Z }, 2025-08-14T21:23:49.0623428Z { 2025-08-14T21:23:49.0623673Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0624056Z "size": 724, 2025-08-14T21:23:49.0624366Z "digest": "sha256:fc6b37d40530f2c5339430321eab67ae1e2e87e997587c7bc8c41504464208f9" 2025-08-14T21:23:49.0624703Z }, 2025-08-14T21:23:49.0624848Z { 2025-08-14T21:23:49.0625089Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0625389Z "size": 141, 2025-08-14T21:23:49.0625680Z "digest": "sha256:38c303d3b62eb463762816db04062a480014a6f3c9754386f3e83ba331ab4d1d" 2025-08-14T21:23:49.0626016Z }, 2025-08-14T21:23:49.0626168Z { 2025-08-14T21:23:49.0626413Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0626719Z "size": 32, 2025-08-14T21:23:49.0627090Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:23:49.0635894Z }, 2025-08-14T21:23:49.0636051Z { 2025-08-14T21:23:49.0636335Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0636716Z "size": 160, 2025-08-14T21:23:49.0637096Z "digest": "sha256:e06d15594a2a76995baebbce7032946ff9f94e281246fbc3f8ab19d8bcc38b81" 2025-08-14T21:23:49.0637546Z }, 2025-08-14T21:23:49.0637708Z { 2025-08-14T21:23:49.0637986Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0638372Z "size": 1010, 2025-08-14T21:23:49.0638692Z "digest": "sha256:0e55deb5cb38fd36b600183f7d86eaca0dabc04d2ff4d49ec2266ee3329edc4a" 2025-08-14T21:23:49.0639034Z }, 2025-08-14T21:23:49.0639193Z { 2025-08-14T21:23:49.0639440Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0639740Z "size": 724, 2025-08-14T21:23:49.0640043Z "digest": 
"sha256:fc6b37d40530f2c5339430321eab67ae1e2e87e997587c7bc8c41504464208f9" 2025-08-14T21:23:49.0640390Z }, 2025-08-14T21:23:49.0640546Z { 2025-08-14T21:23:49.0640784Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0641164Z "size": 134, 2025-08-14T21:23:49.0641535Z "digest": "sha256:4a53d66dce071bb7416414aa1adbc3e4a59003300c0d42038612fabdeb5a1b01" 2025-08-14T21:23:49.0641946Z }, 2025-08-14T21:23:49.0642102Z { 2025-08-14T21:23:49.0642352Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0642727Z "size": 32, 2025-08-14T21:23:49.0643147Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:23:49.0643723Z }, 2025-08-14T21:23:49.0643955Z { 2025-08-14T21:23:49.0644289Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0644710Z "size": 159, 2025-08-14T21:23:49.0645123Z "digest": "sha256:1519daa051b8b80e04125f2f2215dc412dcdbb9502711925e97aeccbda069eaf" 2025-08-14T21:23:49.0645519Z }, 2025-08-14T21:23:49.0645785Z { 2025-08-14T21:23:49.0646131Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0646487Z "size": 1371, 2025-08-14T21:23:49.0646907Z "digest": "sha256:381ed91d2119f078fbba19102a65befc4cb242f8cf47a11fb6f76ea424690692" 2025-08-14T21:23:49.0659654Z }, 2025-08-14T21:23:49.0659878Z { 2025-08-14T21:23:49.0660227Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0660565Z "size": 32, 2025-08-14T21:23:49.0660894Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:23:49.0661252Z }, 2025-08-14T21:23:49.0661416Z { 2025-08-14T21:23:49.0661672Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0661986Z "size": 137, 2025-08-14T21:23:49.0662478Z "digest": "sha256:c6b0a01a96dd479640297d4b012031ffc1bd9fc0daf61d86058f9b675c0a0705" 2025-08-14T21:23:49.0662826Z }, 2025-08-14T21:23:49.0662991Z { 2025-08-14T21:23:49.0663251Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0663566Z "size": 380, 2025-08-14T21:23:49.0663881Z "digest": "sha256:62df6413daeefebde04dcc401134734952e4ea37fc85ff23c89cb9b4fbd45155" 2025-08-14T21:23:49.0664295Z }, 2025-08-14T21:23:49.0664454Z { 2025-08-14T21:23:49.0664774Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0665086Z "size": 32, 2025-08-14T21:23:49.0665394Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:23:49.0665734Z }, 2025-08-14T21:23:49.0665886Z { 2025-08-14T21:23:49.0666128Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0666423Z "size": 104, 2025-08-14T21:23:49.0666731Z "digest": "sha256:7a18bc2a6881b76a6f591c98dafb47e44d903f7a905f7eba0fc3aedb5c90fff7" 2025-08-14T21:23:49.0667082Z }, 2025-08-14T21:23:49.0667224Z { 2025-08-14T21:23:49.0667469Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0667774Z "size": 407, 2025-08-14T21:23:49.0668078Z "digest": "sha256:93359cd58a8cece344fd4291b27647e57761c9399bb54bb0c18149c12af5f66a" 2025-08-14T21:23:49.0668410Z }, 2025-08-14T21:23:49.0668557Z { 2025-08-14T21:23:49.0668804Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0669109Z "size": 32, 2025-08-14T21:23:49.0669421Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:23:49.0669764Z }, 
2025-08-14T21:23:49.0669910Z { 2025-08-14T21:23:49.0670153Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0670508Z "size": 109, 2025-08-14T21:23:49.0677122Z "digest": "sha256:c35ba0a1f353d6894c914a4bfbea9a2c9b8ac1b526af64d34cbe9a12bd83c78e" 2025-08-14T21:23:49.0677485Z }, 2025-08-14T21:23:49.0677646Z { 2025-08-14T21:23:49.0677887Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0678192Z "size": 1896, 2025-08-14T21:23:49.0678504Z "digest": "sha256:dcf1e01c98d6a6f72674d79a4e8e4047b54796576cd06ad682c225a92820a8f5" 2025-08-14T21:23:49.0678847Z }, 2025-08-14T21:23:49.0678992Z { 2025-08-14T21:23:49.0679243Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0679552Z "size": 242635753, 2025-08-14T21:23:49.0679873Z "digest": "sha256:bad0564f61fdf377e3ae31f6fec0ec28b6922da0b9db28408b55b8e97ff1ea51" 2025-08-14T21:23:49.0680225Z }, 2025-08-14T21:23:49.0680378Z { 2025-08-14T21:23:49.0680617Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0680920Z "size": 106, 2025-08-14T21:23:49.0681314Z "digest": "sha256:539ded9057364aade7abe23ab908d2caf53966a186734aa58ae84a56bee659eb" 2025-08-14T21:23:49.0681657Z }, 2025-08-14T21:23:49.0681810Z { 2025-08-14T21:23:49.0682054Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0682358Z "size": 163, 2025-08-14T21:23:49.0682646Z "digest": "sha256:28d482062637d32514edfc447913e98745d7c13d2f277531e64ffcf090ae6d92" 2025-08-14T21:23:49.0682977Z }, 2025-08-14T21:23:49.0683130Z { 2025-08-14T21:23:49.0683372Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0683679Z "size": 7943, 2025-08-14T21:23:49.0683992Z "digest": "sha256:3245316ff51b50b27da4ef7279733c92f76cc652b3fce3877c0e3d510430e8b3" 2025-08-14T21:23:49.0684324Z }, 2025-08-14T21:23:49.0684478Z { 2025-08-14T21:23:49.0684726Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0685079Z "size": 8073, 2025-08-14T21:23:49.0685456Z "digest": "sha256:b53167d1a6df0e4b67d637d073150dff1fb87a823864c0c98d77c15e56babc24" 2025-08-14T21:23:49.0685796Z }, 2025-08-14T21:23:49.0686018Z { 2025-08-14T21:23:49.0686262Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0686566Z "size": 303, 2025-08-14T21:23:49.0686870Z "digest": "sha256:7f5277f691672469f431fd90a8c2bb702c6c68333f6be2cff868f00e416c5a1a" 2025-08-14T21:23:49.0687204Z }, 2025-08-14T21:23:49.0687358Z { 2025-08-14T21:23:49.0687608Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0687906Z "size": 32, 2025-08-14T21:23:49.0688273Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:23:49.0688622Z }, 2025-08-14T21:23:49.0688769Z { 2025-08-14T21:23:49.0689015Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0689322Z "size": 108, 2025-08-14T21:23:49.0689620Z "digest": "sha256:23dff10cdaa5b1e9c7250f0c58a6279f104b35408281e951bfe9983f97e3d9ed" 2025-08-14T21:23:49.0689963Z }, 2025-08-14T21:23:49.0690119Z { 2025-08-14T21:23:49.0690356Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0690674Z "size": 54145699, 2025-08-14T21:23:49.0690999Z "digest": "sha256:9fb73296da6ac15f37f36663bd10afc98abb8a01fb40bff4848de7247d28e018" 2025-08-14T21:23:49.0691349Z }, 2025-08-14T21:23:49.0691492Z { 2025-08-14T21:23:49.0691737Z "mediaType": 
"application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:23:49.0692036Z "size": 32, 2025-08-14T21:23:49.0692345Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:23:49.0692691Z } 2025-08-14T21:23:49.0692846Z ] 2025-08-14T21:23:49.0692993Z } 2025-08-14T21:23:49.0693162Z + exit 0 2025-08-14T21:23:49.0719827Z ##[group]Run set -eux 2025-08-14T21:23:49.0720056Z set -eux 2025-08-14T21:23:49.0720706Z aws secretsmanager get-secret-value --secret-id docker_hub_readonly_token | jq --raw-output '.SecretString' | jq -r .docker_hub_readonly_token | docker login --username pytorchbot --password-stdin 2025-08-14T21:23:49.0734194Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:23:49.0734484Z env: 2025-08-14T21:23:49.0734661Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:23:49.0734872Z ##[endgroup] 2025-08-14T21:23:49.0762298Z + aws secretsmanager get-secret-value --secret-id docker_hub_readonly_token 2025-08-14T21:23:49.0762743Z + jq -r .docker_hub_readonly_token 2025-08-14T21:23:49.0763328Z + docker login --username pytorchbot --password-stdin 2025-08-14T21:23:49.0763970Z + jq --raw-output .SecretString 2025-08-14T21:23:49.6845183Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-08-14T21:23:49.6845673Z Configure a credential helper to remove this warning. See 2025-08-14T21:23:49.6846120Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-08-14T21:23:49.6846427Z 2025-08-14T21:23:49.6846516Z Login Succeeded 2025-08-14T21:23:49.6932088Z ##[group]Run tag=${ECR_DOCKER_IMAGE##*:} 2025-08-14T21:23:49.6932388Z tag=${ECR_DOCKER_IMAGE##*:} 2025-08-14T21:23:49.6932719Z echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}" 2025-08-14T21:23:49.6946442Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:23:49.6946733Z env: 2025-08-14T21:23:49.6946922Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:23:49.6947581Z ECR_DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:49.6948238Z ##[endgroup] 2025-08-14T21:23:49.6976564Z docker pull ghcr.io/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:49.7022555Z ##[group]Run pytorch/test-infra/.github/actions/pull-docker-image@main 2025-08-14T21:23:49.7022895Z with: 2025-08-14T21:23:49.7023516Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:49.7024364Z docker-registry: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:23:49.7024661Z env: 2025-08-14T21:23:49.7024836Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:23:49.7025033Z ##[endgroup] 2025-08-14T21:23:49.7213620Z ##[group]Run set -x 2025-08-14T21:23:49.7213837Z set -x 2025-08-14T21:23:49.7214018Z set +e 2025-08-14T21:23:49.7214197Z  2025-08-14T21:23:49.7214356Z login() { 2025-08-14T21:23:49.7214716Z  aws ecr get-login-password --region us-east-1 | docker login -u AWS --password-stdin "$1" 2025-08-14T21:23:49.7215097Z } 2025-08-14T21:23:49.7215262Z  2025-08-14T21:23:49.7215458Z retry () { 2025-08-14T21:23:49.7215666Z  $* || (sleep 1 && $*) || (sleep 2 && $*) 2025-08-14T21:23:49.7215910Z } 2025-08-14T21:23:49.7216074Z  2025-08-14T21:23:49.7216259Z retry login "${DOCKER_REGISTRY}" 2025-08-14T21:23:49.7216490Z  
2025-08-14T21:23:49.7216867Z IMAGE_SIZE=$(docker manifest inspect "${DOCKER_IMAGE}" | jq '[.layers[].size, .config.size] | add / 1024 / 1024') 2025-08-14T21:23:49.7217351Z echo "Compressed size of image in MB: ${IMAGE_SIZE}" 2025-08-14T21:23:49.7217623Z  2025-08-14T21:23:49.7217789Z set -e 2025-08-14T21:23:49.7218056Z # ignore output since only exit code is used for conditional 2025-08-14T21:23:49.7218426Z # only pull docker image if it's not available locally 2025-08-14T21:23:49.7218827Z if ! docker inspect --type=image "${DOCKER_IMAGE}" >/dev/null 2>/dev/null; then 2025-08-14T21:23:49.7219211Z  retry docker pull "${DOCKER_IMAGE}" 2025-08-14T21:23:49.7219453Z fi 2025-08-14T21:23:49.7224084Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:23:49.7224380Z env: 2025-08-14T21:23:49.7224595Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:23:49.7229536Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:49.7230284Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:23:49.7230591Z ##[endgroup] 2025-08-14T21:23:49.7262714Z + set +e 2025-08-14T21:23:49.7263014Z + retry login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:23:49.7263438Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:23:49.7263955Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:23:49.7264429Z + aws ecr get-login-password --region us-east-1 2025-08-14T21:23:50.2758313Z Login Succeeded 2025-08-14T21:23:50.2758856Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-08-14T21:23:50.2759310Z Configure a credential helper to remove this warning. 
See 2025-08-14T21:23:50.2759751Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-08-14T21:23:50.2760063Z 2025-08-14T21:23:50.2776976Z ++ docker manifest inspect 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:50.2777732Z ++ jq '[.layers[].size, .config.size] | add / 1024 / 1024' 2025-08-14T21:23:50.5901721Z + IMAGE_SIZE=27663.483686447144 2025-08-14T21:23:50.5902092Z + echo 'Compressed size of image in MB: 27663.483686447144' 2025-08-14T21:23:50.5902525Z + set -e 2025-08-14T21:23:50.5902747Z Compressed size of image in MB: 27663.483686447144 2025-08-14T21:23:50.5903773Z + docker inspect --type=image 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:50.6023823Z + retry docker pull 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:50.6025141Z + docker pull 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:23:50.8343872Z pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe: Pulling from pytorch/ci-image 2025-08-14T21:23:50.8344805Z 660ffc76f83b: Pulling fs layer 2025-08-14T21:23:50.8345151Z c7b4a852a455: Pulling fs layer 2025-08-14T21:23:50.8345463Z e5a28988c893: Pulling fs layer 2025-08-14T21:23:50.8345682Z 76a69b57b683: Pulling fs layer 2025-08-14T21:23:50.8345955Z 5c785dcb4cdb: Pulling fs layer 2025-08-14T21:23:50.8346174Z 836ab08052e8: Pulling fs layer 2025-08-14T21:23:50.8346514Z 53b11c77468c: Pulling fs layer 2025-08-14T21:23:50.8346732Z e97311a6a967: Pulling fs layer 2025-08-14T21:23:50.8346959Z 2c414689d31d: Pulling fs layer 2025-08-14T21:23:50.8347178Z 6d89b5f065d5: Pulling fs layer 2025-08-14T21:23:50.8347387Z 5a5cc76ada43: Pulling fs layer 2025-08-14T21:23:50.8347663Z fc6b37d40530: Pulling fs layer 2025-08-14T21:23:50.8347875Z 2e1657907860: Pulling fs layer 2025-08-14T21:23:50.8348080Z 7b92d7a4b8c7: Pulling fs layer 2025-08-14T21:23:50.8348294Z 4f4fb700ef54: Pulling fs layer 2025-08-14T21:23:50.8348509Z d6226eb61f82: Pulling fs layer 2025-08-14T21:23:50.8349022Z 83c70f4266a6: Pulling fs layer 2025-08-14T21:23:50.8349251Z 60c725d21861: Pulling fs layer 2025-08-14T21:23:50.8349469Z a504e76e66a4: Pulling fs layer 2025-08-14T21:23:50.8349685Z fc1c200a4f77: Pulling fs layer 2025-08-14T21:23:50.8349919Z 43273c22704f: Pulling fs layer 2025-08-14T21:23:50.8350139Z 89df389d042a: Pulling fs layer 2025-08-14T21:23:50.8350352Z 684349f50d94: Pulling fs layer 2025-08-14T21:23:50.8350578Z 21d0eae87fb3: Pulling fs layer 2025-08-14T21:23:50.8350819Z c9c2b424b8e0: Pulling fs layer 2025-08-14T21:23:50.8351113Z 53b11c77468c: Waiting 2025-08-14T21:23:50.8351446Z 98dda28f3395: Pulling fs layer 2025-08-14T21:23:50.8351671Z acf5babd87f2: Pulling fs layer 2025-08-14T21:23:50.8351885Z 7c5050d8408d: Pulling fs layer 2025-08-14T21:23:50.8352101Z 7ddd14e2b548: Pulling fs layer 2025-08-14T21:23:50.8355718Z 4ba8e7a736c8: Pulling fs layer 2025-08-14T21:23:50.8356041Z 907c320fee2f: Pulling fs layer 2025-08-14T21:23:50.8356313Z 18c4ed1ec491: Pulling fs layer 2025-08-14T21:23:50.8356567Z d7618c2df6cd: Pulling fs layer 2025-08-14T21:23:50.8356819Z b7bdd9a6f789: 
Pulling fs layer 2025-08-14T21:23:50.8357061Z 5a5cc76ada43: Waiting 2025-08-14T21:23:50.8357259Z 6738ba83282e: Pulling fs layer 2025-08-14T21:23:50.8357558Z dfb0f2488639: Pulling fs layer 2025-08-14T21:23:50.8357776Z dc833b0762f2: Pulling fs layer 2025-08-14T21:23:50.8357985Z 8827df8ca2da: Pulling fs layer 2025-08-14T21:23:50.8358282Z fac8f3bd0f85: Pulling fs layer 2025-08-14T21:23:50.8358540Z d7cf7f140df3: Pulling fs layer 2025-08-14T21:23:50.8358750Z 733eedc8da8d: Pulling fs layer 2025-08-14T21:23:50.8358961Z 5b092eb06909: Pulling fs layer 2025-08-14T21:23:50.8359171Z bc5961031092: Pulling fs layer 2025-08-14T21:23:50.8359373Z 0531cc34c12a: Pulling fs layer 2025-08-14T21:23:50.8359593Z 38c303d3b62e: Pulling fs layer 2025-08-14T21:23:50.8359837Z 76a69b57b683: Waiting 2025-08-14T21:23:50.8360043Z e06d15594a2a: Pulling fs layer 2025-08-14T21:23:50.8360249Z 4f4fb700ef54: Waiting 2025-08-14T21:23:50.8360604Z 7b92d7a4b8c7: Waiting 2025-08-14T21:23:50.8360796Z e97311a6a967: Waiting 2025-08-14T21:23:50.8361032Z 43273c22704f: Waiting 2025-08-14T21:23:50.8361307Z 836ab08052e8: Waiting 2025-08-14T21:23:50.8361491Z 21d0eae87fb3: Waiting 2025-08-14T21:23:50.8361686Z 0e55deb5cb38: Pulling fs layer 2025-08-14T21:23:50.8361888Z 5c785dcb4cdb: Waiting 2025-08-14T21:23:50.8362078Z 684349f50d94: Waiting 2025-08-14T21:23:50.8362260Z 89df389d042a: Waiting 2025-08-14T21:23:50.8362725Z d6226eb61f82: Waiting 2025-08-14T21:23:50.8362915Z 7ddd14e2b548: Waiting 2025-08-14T21:23:50.8363104Z 7c5050d8408d: Waiting 2025-08-14T21:23:50.8363282Z b7bdd9a6f789: Waiting 2025-08-14T21:23:50.8363473Z 98dda28f3395: Waiting 2025-08-14T21:23:50.8363673Z 4a53d66dce07: Pulling fs layer 2025-08-14T21:23:50.8363991Z 2c414689d31d: Waiting 2025-08-14T21:23:50.8364175Z dfb0f2488639: Waiting 2025-08-14T21:23:50.8364360Z fac8f3bd0f85: Waiting 2025-08-14T21:23:50.8364539Z 6d89b5f065d5: Waiting 2025-08-14T21:23:50.8364723Z fc1c200a4f77: Waiting 2025-08-14T21:23:50.8364914Z 907c320fee2f: Waiting 2025-08-14T21:23:50.8365098Z acf5babd87f2: Waiting 2025-08-14T21:23:50.8365300Z 1519daa051b8: Pulling fs layer 2025-08-14T21:23:50.8365521Z 8827df8ca2da: Waiting 2025-08-14T21:23:50.8365704Z 0531cc34c12a: Waiting 2025-08-14T21:23:50.8365892Z 2e1657907860: Waiting 2025-08-14T21:23:50.8366080Z 4ba8e7a736c8: Waiting 2025-08-14T21:23:50.8366254Z fc6b37d40530: Waiting 2025-08-14T21:23:50.8366443Z d7618c2df6cd: Waiting 2025-08-14T21:23:50.8366627Z c9c2b424b8e0: Waiting 2025-08-14T21:23:50.8366813Z 381ed91d2119: Pulling fs layer 2025-08-14T21:23:50.8367016Z 6738ba83282e: Waiting 2025-08-14T21:23:50.8367195Z 18c4ed1ec491: Waiting 2025-08-14T21:23:50.8367379Z 5b092eb06909: Waiting 2025-08-14T21:23:50.8367550Z 38c303d3b62e: Waiting 2025-08-14T21:23:50.8367734Z 1519daa051b8: Waiting 2025-08-14T21:23:50.8367923Z c6b0a01a96dd: Pulling fs layer 2025-08-14T21:23:50.8368121Z e06d15594a2a: Waiting 2025-08-14T21:23:50.8368301Z 381ed91d2119: Waiting 2025-08-14T21:23:50.8368479Z bc5961031092: Waiting 2025-08-14T21:23:50.8368662Z 62df6413daee: Pulling fs layer 2025-08-14T21:23:50.8369029Z c6b0a01a96dd: Waiting 2025-08-14T21:23:50.8369212Z 62df6413daee: Waiting 2025-08-14T21:23:50.8369398Z 7a18bc2a6881: Pulling fs layer 2025-08-14T21:23:50.8369619Z 93359cd58a8c: Pulling fs layer 2025-08-14T21:23:50.8369823Z 0e55deb5cb38: Waiting 2025-08-14T21:23:50.8370017Z c35ba0a1f353: Pulling fs layer 2025-08-14T21:23:50.8370245Z dcf1e01c98d6: Pulling fs layer 2025-08-14T21:23:50.8370453Z 7a18bc2a6881: Waiting 2025-08-14T21:23:50.8370639Z c35ba0a1f353: Waiting 
2025-08-14T21:23:50.8370828Z 93359cd58a8c: Waiting 2025-08-14T21:23:50.8371022Z bad0564f61fd: Pulling fs layer 2025-08-14T21:23:50.8371229Z 539ded905736: Pulling fs layer 2025-08-14T21:23:50.8371443Z 28d482062637: Pulling fs layer 2025-08-14T21:23:50.8371656Z dcf1e01c98d6: Waiting 2025-08-14T21:23:50.8371840Z 28d482062637: Waiting 2025-08-14T21:23:50.8372016Z 539ded905736: Waiting 2025-08-14T21:23:50.8372203Z bad0564f61fd: Waiting 2025-08-14T21:23:50.8372394Z 3245316ff51b: Pulling fs layer 2025-08-14T21:23:50.8372593Z 60c725d21861: Waiting 2025-08-14T21:23:50.8372785Z b53167d1a6df: Pulling fs layer 2025-08-14T21:23:50.8372994Z d7cf7f140df3: Waiting 2025-08-14T21:23:50.8373185Z 7f5277f69167: Pulling fs layer 2025-08-14T21:23:50.8373400Z 23dff10cdaa5: Pulling fs layer 2025-08-14T21:23:50.8373622Z 9fb73296da6a: Pulling fs layer 2025-08-14T21:23:50.8373823Z 3245316ff51b: Waiting 2025-08-14T21:23:50.8374014Z b53167d1a6df: Waiting 2025-08-14T21:23:50.8374202Z 23dff10cdaa5: Waiting 2025-08-14T21:23:50.8374378Z 9fb73296da6a: Waiting 2025-08-14T21:23:50.9171471Z c7b4a852a455: Verifying Checksum 2025-08-14T21:23:50.9171789Z c7b4a852a455: Download complete 2025-08-14T21:23:50.9963279Z 76a69b57b683: Verifying Checksum 2025-08-14T21:23:50.9963611Z 76a69b57b683: Download complete 2025-08-14T21:23:51.0685716Z 5c785dcb4cdb: Verifying Checksum 2025-08-14T21:23:51.0686023Z 5c785dcb4cdb: Download complete 2025-08-14T21:23:51.1542593Z 836ab08052e8: Download complete 2025-08-14T21:23:51.1923852Z 660ffc76f83b: Verifying Checksum 2025-08-14T21:23:51.1924147Z 660ffc76f83b: Download complete 2025-08-14T21:23:51.2502643Z 53b11c77468c: Verifying Checksum 2025-08-14T21:23:51.2502939Z 53b11c77468c: Download complete 2025-08-14T21:23:51.2775193Z e97311a6a967: Download complete 2025-08-14T21:23:51.3754208Z 6d89b5f065d5: Download complete 2025-08-14T21:23:51.4529445Z 5a5cc76ada43: Download complete 2025-08-14T21:23:51.5505739Z fc6b37d40530: Download complete 2025-08-14T21:23:51.6515982Z 2e1657907860: Verifying Checksum 2025-08-14T21:23:51.6516718Z 2e1657907860: Download complete 2025-08-14T21:23:52.1004878Z 660ffc76f83b: Pull complete 2025-08-14T21:23:52.1107692Z c7b4a852a455: Pull complete 2025-08-14T21:23:52.5164108Z 2c414689d31d: Verifying Checksum 2025-08-14T21:23:52.5164680Z 2c414689d31d: Download complete 2025-08-14T21:23:52.5272586Z 4f4fb700ef54: Verifying Checksum 2025-08-14T21:23:52.5272865Z 4f4fb700ef54: Download complete 2025-08-14T21:23:52.6222710Z d6226eb61f82: Download complete 2025-08-14T21:23:52.7466019Z 83c70f4266a6: Download complete 2025-08-14T21:23:52.8550969Z 60c725d21861: Verifying Checksum 2025-08-14T21:23:52.8551604Z 60c725d21861: Download complete 2025-08-14T21:23:52.9609676Z a504e76e66a4: Verifying Checksum 2025-08-14T21:23:52.9609991Z a504e76e66a4: Download complete 2025-08-14T21:23:53.0511104Z fc1c200a4f77: Verifying Checksum 2025-08-14T21:23:53.0511441Z fc1c200a4f77: Download complete 2025-08-14T21:23:53.1540744Z 43273c22704f: Verifying Checksum 2025-08-14T21:23:53.1541061Z 43273c22704f: Download complete 2025-08-14T21:23:53.2620688Z 89df389d042a: Verifying Checksum 2025-08-14T21:23:53.2621207Z 89df389d042a: Download complete 2025-08-14T21:23:53.3364374Z 684349f50d94: Verifying Checksum 2025-08-14T21:23:53.3364665Z 684349f50d94: Download complete 2025-08-14T21:23:53.4362328Z 21d0eae87fb3: Verifying Checksum 2025-08-14T21:23:53.4362655Z 21d0eae87fb3: Download complete 2025-08-14T21:23:53.5179807Z c9c2b424b8e0: Verifying Checksum 2025-08-14T21:23:53.5180618Z c9c2b424b8e0: Download complete 
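Note: the pull-docker-image step logged above boils down to three commands that can be reproduced outside the workflow. This is a minimal sketch assuming the same DOCKER_REGISTRY and DOCKER_IMAGE values shown in that step's env block; it computes the compressed image size from the registry manifest with jq and only pulls when the image is not already cached on the runner:

  # minimal sketch; DOCKER_REGISTRY / DOCKER_IMAGE as in the env block above
  aws ecr get-login-password --region us-east-1 | docker login -u AWS --password-stdin "${DOCKER_REGISTRY}"
  IMAGE_SIZE=$(docker manifest inspect "${DOCKER_IMAGE}" | jq '[.layers[].size, .config.size] | add / 1024 / 1024')
  echo "Compressed size of image in MB: ${IMAGE_SIZE}"
  if ! docker inspect --type=image "${DOCKER_IMAGE}" >/dev/null 2>&1; then
    docker pull "${DOCKER_IMAGE}"
  fi

In this run the local cache check failed (the image was not present on the runner), so the roughly 27 GB compressed image is fetched layer by layer in the pull output that follows.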
2025-08-14T21:23:53.5904264Z 98dda28f3395: Verifying Checksum 2025-08-14T21:23:53.5904736Z 98dda28f3395: Download complete 2025-08-14T21:23:53.6893757Z acf5babd87f2: Verifying Checksum 2025-08-14T21:23:53.6894220Z acf5babd87f2: Download complete 2025-08-14T21:23:54.0187517Z e5a28988c893: Verifying Checksum 2025-08-14T21:23:54.0187874Z e5a28988c893: Download complete 2025-08-14T21:23:54.1184797Z 7ddd14e2b548: Verifying Checksum 2025-08-14T21:23:54.1185184Z 7ddd14e2b548: Download complete 2025-08-14T21:23:54.2008475Z 4ba8e7a736c8: Verifying Checksum 2025-08-14T21:23:54.2008889Z 4ba8e7a736c8: Download complete 2025-08-14T21:23:54.3036020Z 907c320fee2f: Download complete 2025-08-14T21:23:54.4102812Z 18c4ed1ec491: Verifying Checksum 2025-08-14T21:23:54.4103558Z 18c4ed1ec491: Download complete 2025-08-14T21:23:54.6698957Z d7618c2df6cd: Verifying Checksum 2025-08-14T21:23:54.6699619Z d7618c2df6cd: Download complete 2025-08-14T21:23:54.8079657Z b7bdd9a6f789: Verifying Checksum 2025-08-14T21:23:54.8080059Z b7bdd9a6f789: Download complete 2025-08-14T21:23:54.8920256Z 6738ba83282e: Verifying Checksum 2025-08-14T21:23:54.8921507Z 6738ba83282e: Download complete 2025-08-14T21:23:55.0027959Z dfb0f2488639: Verifying Checksum 2025-08-14T21:23:55.0028294Z dfb0f2488639: Download complete 2025-08-14T21:23:55.1350507Z dc833b0762f2: Download complete 2025-08-14T21:23:55.2345881Z 8827df8ca2da: Verifying Checksum 2025-08-14T21:23:55.2346249Z 8827df8ca2da: Download complete 2025-08-14T21:23:58.2908146Z 7c5050d8408d: Verifying Checksum 2025-08-14T21:23:58.2908456Z 7c5050d8408d: Download complete 2025-08-14T21:23:58.5183979Z d7cf7f140df3: Verifying Checksum 2025-08-14T21:23:58.5184290Z d7cf7f140df3: Download complete 2025-08-14T21:24:01.3039755Z 733eedc8da8d: Verifying Checksum 2025-08-14T21:24:01.3040129Z 733eedc8da8d: Download complete 2025-08-14T21:24:01.8696263Z e5a28988c893: Pull complete 2025-08-14T21:24:01.9842720Z 76a69b57b683: Pull complete 2025-08-14T21:24:02.1383521Z 5c785dcb4cdb: Pull complete 2025-08-14T21:24:02.3137507Z 836ab08052e8: Pull complete 2025-08-14T21:24:02.4642778Z 53b11c77468c: Pull complete 2025-08-14T21:24:02.6113904Z e97311a6a967: Pull complete 2025-08-14T21:24:05.2005501Z 2c414689d31d: Pull complete 2025-08-14T21:24:05.3790530Z 6d89b5f065d5: Pull complete 2025-08-14T21:24:05.5475612Z 5a5cc76ada43: Pull complete 2025-08-14T21:24:05.7103986Z fc6b37d40530: Pull complete 2025-08-14T21:24:05.8826723Z 2e1657907860: Pull complete 2025-08-14T21:24:25.6849132Z 7b92d7a4b8c7: Verifying Checksum 2025-08-14T21:24:25.6849790Z 7b92d7a4b8c7: Download complete 2025-08-14T21:24:25.8011651Z bc5961031092: Download complete 2025-08-14T21:24:25.8846521Z 0531cc34c12a: Verifying Checksum 2025-08-14T21:24:25.8846830Z 0531cc34c12a: Download complete 2025-08-14T21:24:25.9789327Z 38c303d3b62e: Download complete 2025-08-14T21:24:26.0474484Z e06d15594a2a: Verifying Checksum 2025-08-14T21:24:26.0474804Z e06d15594a2a: Download complete 2025-08-14T21:24:26.1282867Z 0e55deb5cb38: Verifying Checksum 2025-08-14T21:24:26.1283177Z 0e55deb5cb38: Download complete 2025-08-14T21:24:26.2102530Z 4a53d66dce07: Verifying Checksum 2025-08-14T21:24:26.2102901Z 4a53d66dce07: Download complete 2025-08-14T21:24:26.2874178Z 1519daa051b8: Verifying Checksum 2025-08-14T21:24:26.2874645Z 1519daa051b8: Download complete 2025-08-14T21:24:26.3791289Z 381ed91d2119: Verifying Checksum 2025-08-14T21:24:26.3791621Z 381ed91d2119: Download complete 2025-08-14T21:24:26.4603186Z c6b0a01a96dd: Verifying Checksum 2025-08-14T21:24:26.4603514Z 
c6b0a01a96dd: Download complete 2025-08-14T21:24:26.5405553Z 62df6413daee: Verifying Checksum 2025-08-14T21:24:26.5406237Z 62df6413daee: Download complete 2025-08-14T21:24:26.6288443Z 7a18bc2a6881: Verifying Checksum 2025-08-14T21:24:26.6288800Z 7a18bc2a6881: Download complete 2025-08-14T21:24:26.7543468Z 93359cd58a8c: Download complete 2025-08-14T21:24:26.8519430Z c35ba0a1f353: Verifying Checksum 2025-08-14T21:24:26.8519864Z c35ba0a1f353: Download complete 2025-08-14T21:24:26.9172185Z dcf1e01c98d6: Verifying Checksum 2025-08-14T21:24:26.9172646Z dcf1e01c98d6: Download complete 2025-08-14T21:24:29.3917460Z bad0564f61fd: Verifying Checksum 2025-08-14T21:24:29.5461076Z 539ded905736: Download complete 2025-08-14T21:24:29.6261660Z 28d482062637: Verifying Checksum 2025-08-14T21:24:29.6261989Z 28d482062637: Download complete 2025-08-14T21:24:29.7308250Z 3245316ff51b: Verifying Checksum 2025-08-14T21:24:29.7308649Z 3245316ff51b: Download complete 2025-08-14T21:24:29.7981657Z b53167d1a6df: Verifying Checksum 2025-08-14T21:24:29.7981965Z b53167d1a6df: Download complete 2025-08-14T21:24:29.9002788Z 7f5277f69167: Verifying Checksum 2025-08-14T21:24:29.9003157Z 7f5277f69167: Download complete 2025-08-14T21:24:29.9956623Z 23dff10cdaa5: Verifying Checksum 2025-08-14T21:24:29.9956956Z 23dff10cdaa5: Download complete 2025-08-14T21:24:30.5913976Z 9fb73296da6a: Verifying Checksum 2025-08-14T21:24:30.5914318Z 9fb73296da6a: Download complete 2025-08-14T21:25:05.7497626Z 5b092eb06909: Verifying Checksum 2025-08-14T21:25:05.7497991Z 5b092eb06909: Download complete 2025-08-14T21:25:21.7921618Z 7b92d7a4b8c7: Pull complete 2025-08-14T21:25:21.9786339Z 4f4fb700ef54: Pull complete 2025-08-14T21:25:22.2027527Z d6226eb61f82: Pull complete 2025-08-14T21:25:22.4349982Z 83c70f4266a6: Pull complete 2025-08-14T21:25:22.6276495Z 60c725d21861: Pull complete 2025-08-14T21:25:22.8638732Z a504e76e66a4: Pull complete 2025-08-14T21:25:23.0478327Z fc1c200a4f77: Pull complete 2025-08-14T21:25:23.2410589Z 43273c22704f: Pull complete 2025-08-14T21:25:23.4413504Z 89df389d042a: Pull complete 2025-08-14T21:25:23.6443575Z 684349f50d94: Pull complete 2025-08-14T21:25:23.8457455Z 21d0eae87fb3: Pull complete 2025-08-14T21:25:24.0654887Z c9c2b424b8e0: Pull complete 2025-08-14T21:25:24.4746294Z 98dda28f3395: Pull complete 2025-08-14T21:25:24.6779409Z acf5babd87f2: Pull complete 2025-08-14T21:25:32.9080808Z 7c5050d8408d: Pull complete 2025-08-14T21:25:33.0691106Z 7ddd14e2b548: Pull complete 2025-08-14T21:25:33.2096329Z 4ba8e7a736c8: Pull complete 2025-08-14T21:25:33.5780175Z 907c320fee2f: Pull complete 2025-08-14T21:25:33.6481136Z 18c4ed1ec491: Pull complete 2025-08-14T21:25:34.0911110Z d7618c2df6cd: Pull complete 2025-08-14T21:25:34.2839720Z b7bdd9a6f789: Pull complete 2025-08-14T21:25:34.4815527Z 6738ba83282e: Pull complete 2025-08-14T21:25:34.8538832Z dfb0f2488639: Pull complete 2025-08-14T21:25:35.0514177Z dc833b0762f2: Pull complete 2025-08-14T21:25:35.2316923Z 8827df8ca2da: Pull complete 2025-08-14T21:26:52.0364305Z fac8f3bd0f85: Download complete 2025-08-14T21:30:00.8826469Z fac8f3bd0f85: Pull complete 2025-08-14T21:30:00.9103337Z d7cf7f140df3: Pull complete 2025-08-14T21:30:02.6764468Z 733eedc8da8d: Pull complete 2025-08-14T21:32:00.0640378Z 5b092eb06909: Pull complete 2025-08-14T21:32:00.2746696Z bc5961031092: Pull complete 2025-08-14T21:32:00.4646202Z 0531cc34c12a: Pull complete 2025-08-14T21:32:00.7822257Z 38c303d3b62e: Pull complete 2025-08-14T21:32:01.0460817Z e06d15594a2a: Pull complete 2025-08-14T21:32:01.2391075Z 0e55deb5cb38: 
Pull complete 2025-08-14T21:32:01.6124604Z 4a53d66dce07: Pull complete 2025-08-14T21:32:01.9938255Z 1519daa051b8: Pull complete 2025-08-14T21:32:02.1780522Z 381ed91d2119: Pull complete 2025-08-14T21:32:02.5199997Z c6b0a01a96dd: Pull complete 2025-08-14T21:32:02.6741084Z 62df6413daee: Pull complete 2025-08-14T21:32:03.0109852Z 7a18bc2a6881: Pull complete 2025-08-14T21:32:03.1978226Z 93359cd58a8c: Pull complete 2025-08-14T21:32:03.6048279Z c35ba0a1f353: Pull complete 2025-08-14T21:32:03.8166170Z dcf1e01c98d6: Pull complete 2025-08-14T21:32:11.2739879Z bad0564f61fd: Pull complete 2025-08-14T21:32:11.5123523Z 539ded905736: Pull complete 2025-08-14T21:32:11.7465737Z 28d482062637: Pull complete 2025-08-14T21:32:11.9079115Z 3245316ff51b: Pull complete 2025-08-14T21:32:12.0967082Z b53167d1a6df: Pull complete 2025-08-14T21:32:12.2730965Z 7f5277f69167: Pull complete 2025-08-14T21:32:12.6735757Z 23dff10cdaa5: Pull complete 2025-08-14T21:32:14.8365839Z 9fb73296da6a: Pull complete 2025-08-14T21:32:15.0852823Z Digest: sha256:4236794baba289041d240d08fd393bbd57497c3012e5e0ccd9fd98f61ebf35c6 2025-08-14T21:32:15.1066277Z Status: Downloaded newer image for 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:32:15.1165341Z 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:32:15.1260202Z ##[group]Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-08-14T21:32:15.1260925Z echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-08-14T21:32:15.1270209Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:32:15.1270512Z env: 2025-08-14T21:32:15.1270706Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:15.1270917Z ##[endgroup] 2025-08-14T21:32:15.1363894Z Prepare all required actions 2025-08-14T21:32:15.1394964Z ##[group]Run ./.github/actions/get-workflow-job-id 2025-08-14T21:32:15.1395239Z with: 2025-08-14T21:32:15.1395940Z github-token: *** 2025-08-14T21:32:15.1396132Z env: 2025-08-14T21:32:15.1396313Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:15.1396517Z ##[endgroup] 2025-08-14T21:32:15.1429210Z ##[group]Run set -eux 2025-08-14T21:32:15.1429445Z set -eux 2025-08-14T21:32:15.1429783Z python3 .github/scripts/get_workflow_job_id.py "${GITHUB_RUN_ID}" "${RUNNER_NAME}" 2025-08-14T21:32:15.1439725Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:32:15.1440022Z env: 2025-08-14T21:32:15.1440205Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:15.1440599Z GITHUB_TOKEN: *** 2025-08-14T21:32:15.1440802Z ##[endgroup] 2025-08-14T21:32:15.1469887Z + python3 .github/scripts/get_workflow_job_id.py 16976338999 i-0019fc24284416ca3 2025-08-14T21:32:16.7871609Z Setting output job-id=48128301923 2025-08-14T21:32:16.7872575Z Setting output job-name=linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2) 2025-08-14T21:32:16.8004517Z ##[group]Run python3 -m pip install psutil==5.9.8 dataclasses_json==0.6.7 nvidia-ml-py==11.525.84 2025-08-14T21:32:16.8005138Z python3 -m pip install psutil==5.9.8 dataclasses_json==0.6.7 nvidia-ml-py==11.525.84 2025-08-14T21:32:16.8005980Z python3 -m tools.stats.monitor --log-interval 
"$MONITOR_LOG_INTERVAL" --data-collect-interval "$MONITOR_DATA_COLLECT_INTERVAL" > usage_log.txt 2>&1 & 2025-08-14T21:32:16.8006597Z echo "monitor-script-pid=${!}" >> "${GITHUB_OUTPUT}" 2025-08-14T21:32:16.8012982Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:32:16.8013425Z env: 2025-08-14T21:32:16.8013607Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:16.8013805Z JOB_ID: 48128301923 2025-08-14T21:32:16.8014327Z JOB_NAME: linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2) 2025-08-14T21:32:16.8014888Z WORKFLOW_NAME: inductor-periodic 2025-08-14T21:32:16.8015158Z WORKFLOW_RUN_ID: 16976338999 2025-08-14T21:32:16.8015377Z MONITOR_LOG_INTERVAL: 5 2025-08-14T21:32:16.8015590Z MONITOR_DATA_COLLECT_INTERVAL: 1 2025-08-14T21:32:16.8015810Z ##[endgroup] 2025-08-14T21:32:17.4355148Z Defaulting to user installation because normal site-packages is not writeable 2025-08-14T21:32:17.8194891Z Collecting psutil==5.9.8 2025-08-14T21:32:17.8377732Z Downloading psutil-5.9.8-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (288 kB) 2025-08-14T21:32:17.9726167Z Collecting dataclasses_json==0.6.7 2025-08-14T21:32:17.9766701Z Downloading dataclasses_json-0.6.7-py3-none-any.whl (28 kB) 2025-08-14T21:32:18.0208979Z Collecting nvidia-ml-py==11.525.84 2025-08-14T21:32:18.0251843Z Downloading nvidia_ml_py-11.525.84-py3-none-any.whl (34 kB) 2025-08-14T21:32:18.0859230Z Collecting typing-inspect<1,>=0.4.0 2025-08-14T21:32:18.0899867Z Downloading typing_inspect-0.9.0-py3-none-any.whl (8.8 kB) 2025-08-14T21:32:18.2297273Z Collecting marshmallow<4.0.0,>=3.18.0 2025-08-14T21:32:18.2344574Z Downloading marshmallow-3.26.1-py3-none-any.whl (50 kB) 2025-08-14T21:32:18.3374984Z Collecting packaging>=17.0 2025-08-14T21:32:18.3414542Z Downloading packaging-25.0-py3-none-any.whl (66 kB) 2025-08-14T21:32:18.4055452Z Collecting mypy-extensions>=0.3.0 2025-08-14T21:32:18.4093725Z Downloading mypy_extensions-1.1.0-py3-none-any.whl (5.0 kB) 2025-08-14T21:32:18.4982477Z Collecting typing-extensions>=3.7.4 2025-08-14T21:32:18.5020593Z Downloading typing_extensions-4.14.1-py3-none-any.whl (43 kB) 2025-08-14T21:32:18.7242222Z Installing collected packages: typing-extensions, packaging, mypy-extensions, typing-inspect, marshmallow, psutil, nvidia-ml-py, dataclasses-json 2025-08-14T21:32:19.3323774Z Successfully installed dataclasses-json-0.6.7 marshmallow-3.26.1 mypy-extensions-1.1.0 nvidia-ml-py-11.525.84 packaging-25.0 psutil-5.9.8 typing-extensions-4.14.1 typing-inspect-0.9.0 2025-08-14T21:32:19.5868993Z Prepare all required actions 2025-08-14T21:32:19.5869328Z Getting action download info 2025-08-14T21:32:19.7108285Z Download action repository 'seemethere/download-artifact-s3@v4' (SHA:1da556a7aa0a088e3153970611f6c432d58e80e6) 2025-08-14T21:32:20.2332151Z Download action repository 'actions/download-artifact@v4' (SHA:d3f86a106a0bac45b974a628896c90dbdf5c8093) 2025-08-14T21:32:21.8371011Z ##[group]Run ./.github/actions/download-build-artifacts 2025-08-14T21:32:21.8371299Z with: 2025-08-14T21:32:21.8371508Z name: linux-jammy-py3.9-gcc11-build 2025-08-14T21:32:21.8371756Z s3-bucket: gha-artifacts 2025-08-14T21:32:21.8371965Z env: 2025-08-14T21:32:21.8372139Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:21.8372359Z ##[endgroup] 2025-08-14T21:32:21.8402559Z ##[group]Run seemethere/download-artifact-s3@v4 2025-08-14T21:32:21.8402951Z with: 2025-08-14T21:32:21.8403253Z name: 
linux-jammy-py3.9-gcc11-build 2025-08-14T21:32:21.8403578Z s3-bucket: gha-artifacts 2025-08-14T21:32:21.8403994Z region: us-east-1 2025-08-14T21:32:21.8404225Z env: 2025-08-14T21:32:21.8404499Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:21.8404806Z ##[endgroup] 2025-08-14T21:32:22.6655826Z (node:49025) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023. 2025-08-14T21:32:22.6656332Z 2025-08-14T21:32:22.6656535Z Please migrate your code to use AWS SDK for JavaScript (v3). 2025-08-14T21:32:22.6656977Z For more information, check the migration guide at https://a.co/7PzMCcy 2025-08-14T21:32:22.6657406Z (Use `node --trace-warnings ...` to show where the warning was created) 2025-08-14T21:32:23.5567911Z Found 1 objects with prefix pytorch/pytorch/16976338999/linux-jammy-py3.9-gcc11-build/ 2025-08-14T21:32:23.5568961Z Starting download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/artifacts.zip 2025-08-14T21:32:28.1664132Z Finished download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/artifacts.zip 2025-08-14T21:32:28.1664969Z Artifact download has finished successfully 2025-08-14T21:32:28.1897349Z ##[group]Run unzip -o artifacts.zip 2025-08-14T21:32:28.1897629Z unzip -o artifacts.zip 2025-08-14T21:32:28.1909459Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:32:28.1909762Z env: 2025-08-14T21:32:28.1909957Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:28.1910165Z ##[endgroup] 2025-08-14T21:32:28.2287077Z Archive: artifacts.zip 2025-08-14T21:32:28.2287354Z creating: dist/ 2025-08-14T21:32:29.4801715Z inflating: dist/torch-2.9.0a0+git1fc683c-cp39-cp39-linux_x86_64.whl 2025-08-14T21:32:29.4802234Z creating: dist/vision/ 2025-08-14T21:32:29.4893672Z inflating: dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl 2025-08-14T21:32:29.4894202Z creating: dist/audio/ 2025-08-14T21:32:29.5010193Z inflating: dist/audio/torchaudio-2.8.0a0+bdb88e1-cp39-cp39-linux_x86_64.whl 2025-08-14T21:32:29.5010668Z creating: dist/ao/ 2025-08-14T21:32:29.5054949Z inflating: dist/ao/torchao-0.7.0+git51c87b6e-py3-none-any.whl 2025-08-14T21:32:29.5194180Z inflating: dist/.ninja_log 2025-08-14T21:32:29.5194509Z creating: build/custom_test_artifacts/ 2025-08-14T21:32:29.5194822Z creating: build/custom_test_artifacts/custom-op-build/ 2025-08-14T21:32:29.5195206Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/ 2025-08-14T21:32:29.5195643Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/pkgRedirects/ 2025-08-14T21:32:29.5197597Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeConfigureLog.yaml 2025-08-14T21:32:29.5198201Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/ 2025-08-14T21:32:29.5198803Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CMakeSystem.cmake 2025-08-14T21:32:29.5203656Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdC/ 2025-08-14T21:32:29.5204805Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdC/tmp/ 2025-08-14T21:32:29.5205387Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdC/CMakeCCompilerId.c 2025-08-14T21:32:29.5206055Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdC/a.out 2025-08-14T21:32:29.5206631Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CMakeCCompiler.cmake 2025-08-14T21:32:29.5207216Z creating: 
build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdCXX/ 2025-08-14T21:32:29.5207777Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdCXX/tmp/ 2025-08-14T21:32:29.5208463Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdCXX/CMakeCXXCompilerId.cpp 2025-08-14T21:32:29.5209093Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdCXX/a.out 2025-08-14T21:32:29.5209772Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CMakeCXXCompiler.cmake 2025-08-14T21:32:29.5210373Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_C.bin 2025-08-14T21:32:29.5211134Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_CXX.bin 2025-08-14T21:32:29.5211828Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeScratch/ 2025-08-14T21:32:29.5212415Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/cmake.check_cache 2025-08-14T21:32:29.5213032Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/ 2025-08-14T21:32:29.5213648Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/compiler_depend.ts 2025-08-14T21:32:29.5214569Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/compiler_depend.make 2025-08-14T21:32:29.5215161Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/depend.make 2025-08-14T21:32:29.5215840Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/link.txt 2025-08-14T21:32:29.5216490Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/cmake_clean.cmake 2025-08-14T21:32:29.5217176Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/build.make 2025-08-14T21:32:29.5217804Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/DependInfo.cmake 2025-08-14T21:32:29.5218484Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/flags.make 2025-08-14T21:32:29.5219029Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/progress.make 2025-08-14T21:32:29.5237452Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o.d 2025-08-14T21:32:29.5432662Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o 2025-08-14T21:32:29.5433462Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/ 2025-08-14T21:32:29.5434209Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/compiler_depend.ts 2025-08-14T21:32:29.5434988Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/compiler_depend.make 2025-08-14T21:32:29.5435688Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/depend.make 2025-08-14T21:32:29.5436301Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/link.txt 2025-08-14T21:32:29.5437046Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/cmake_clean.cmake 2025-08-14T21:32:29.5437712Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/build.make 2025-08-14T21:32:29.5438699Z inflating: 
build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/DependInfo.cmake 2025-08-14T21:32:29.5439295Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/flags.make 2025-08-14T21:32:29.5439878Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/progress.make 2025-08-14T21:32:29.5467020Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o.d 2025-08-14T21:32:29.5552951Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o 2025-08-14T21:32:29.5553937Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeDirectoryInformation.cmake 2025-08-14T21:32:29.5554657Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/TargetDirectories.txt 2025-08-14T21:32:29.5555253Z extracting: build/custom_test_artifacts/custom-op-build/CMakeFiles/progress.marks 2025-08-14T21:32:29.5555777Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile2 2025-08-14T21:32:29.5556442Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile.cmake 2025-08-14T21:32:29.5557071Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/InstallScripts.json 2025-08-14T21:32:29.5558353Z inflating: build/custom_test_artifacts/custom-op-build/CMakeCache.txt 2025-08-14T21:32:29.5559119Z inflating: build/custom_test_artifacts/custom-op-build/Makefile 2025-08-14T21:32:29.5559960Z inflating: build/custom_test_artifacts/custom-op-build/cmake_install.cmake 2025-08-14T21:32:29.5745293Z inflating: build/custom_test_artifacts/custom-op-build/libcustom_ops.so 2025-08-14T21:32:29.5803033Z inflating: build/custom_test_artifacts/custom-op-build/test_custom_ops 2025-08-14T21:32:29.5803536Z creating: build/custom_test_artifacts/jit-hook-build/ 2025-08-14T21:32:29.5804001Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/ 2025-08-14T21:32:29.5804545Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/pkgRedirects/ 2025-08-14T21:32:29.5805549Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeConfigureLog.yaml 2025-08-14T21:32:29.5806037Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/ 2025-08-14T21:32:29.5806527Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CMakeSystem.cmake 2025-08-14T21:32:29.5807046Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdC/ 2025-08-14T21:32:29.5807553Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdC/tmp/ 2025-08-14T21:32:29.5814613Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdC/CMakeCCompilerId.c 2025-08-14T21:32:29.5815873Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdC/a.out 2025-08-14T21:32:29.5816480Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CMakeCCompiler.cmake 2025-08-14T21:32:29.5817011Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdCXX/ 2025-08-14T21:32:29.5817512Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdCXX/tmp/ 2025-08-14T21:32:29.5819227Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdCXX/CMakeCXXCompilerId.cpp 2025-08-14T21:32:29.5820373Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdCXX/a.out 2025-08-14T21:32:29.5821203Z inflating: 
build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CMakeCXXCompiler.cmake 2025-08-14T21:32:29.5822521Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_C.bin 2025-08-14T21:32:29.5824161Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_CXX.bin 2025-08-14T21:32:29.5824712Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeScratch/ 2025-08-14T21:32:29.5825373Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/cmake.check_cache 2025-08-14T21:32:29.5825858Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/ 2025-08-14T21:32:29.5826414Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/compiler_depend.ts 2025-08-14T21:32:29.5827030Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/compiler_depend.make 2025-08-14T21:32:29.5827626Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/depend.make 2025-08-14T21:32:29.5828176Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/link.txt 2025-08-14T21:32:29.5828757Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/cmake_clean.cmake 2025-08-14T21:32:29.5829342Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/build.make 2025-08-14T21:32:29.5829938Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/DependInfo.cmake 2025-08-14T21:32:29.5830516Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/flags.make 2025-08-14T21:32:29.5831073Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/progress.make 2025-08-14T21:32:29.5856437Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o.d 2025-08-14T21:32:29.5924637Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o 2025-08-14T21:32:29.5933458Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeDirectoryInformation.cmake 2025-08-14T21:32:29.5934396Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/TargetDirectories.txt 2025-08-14T21:32:29.5935064Z extracting: build/custom_test_artifacts/jit-hook-build/CMakeFiles/progress.marks 2025-08-14T21:32:29.5935696Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile2 2025-08-14T21:32:29.5936264Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile.cmake 2025-08-14T21:32:29.5936744Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/InstallScripts.json 2025-08-14T21:32:29.5937213Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeCache.txt 2025-08-14T21:32:29.5937610Z inflating: build/custom_test_artifacts/jit-hook-build/Makefile 2025-08-14T21:32:29.5938014Z inflating: build/custom_test_artifacts/jit-hook-build/cmake_install.cmake 2025-08-14T21:32:29.5975424Z inflating: build/custom_test_artifacts/jit-hook-build/test_jit_hooks 2025-08-14T21:32:29.5975947Z creating: build/custom_test_artifacts/custom-backend-build/ 2025-08-14T21:32:29.5976453Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/ 2025-08-14T21:32:29.5976940Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/pkgRedirects/ 2025-08-14T21:32:29.5979333Z inflating: 
build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeConfigureLog.yaml 2025-08-14T21:32:29.5979847Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/ 2025-08-14T21:32:29.5980356Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CMakeSystem.cmake 2025-08-14T21:32:29.5980901Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdC/ 2025-08-14T21:32:29.5981434Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdC/tmp/ 2025-08-14T21:32:29.5982904Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdC/CMakeCCompilerId.c 2025-08-14T21:32:29.5984178Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdC/a.out 2025-08-14T21:32:29.5985212Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CMakeCCompiler.cmake 2025-08-14T21:32:29.5985864Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdCXX/ 2025-08-14T21:32:29.5986407Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdCXX/tmp/ 2025-08-14T21:32:29.5987538Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdCXX/CMakeCXXCompilerId.cpp 2025-08-14T21:32:29.5989270Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdCXX/a.out 2025-08-14T21:32:29.5990119Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CMakeCXXCompiler.cmake 2025-08-14T21:32:29.5991523Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_C.bin 2025-08-14T21:32:29.5993028Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_CXX.bin 2025-08-14T21:32:29.5993772Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeScratch/ 2025-08-14T21:32:29.5994440Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/cmake.check_cache 2025-08-14T21:32:29.5995113Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/ 2025-08-14T21:32:29.5995829Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/compiler_depend.ts 2025-08-14T21:32:29.5996601Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/compiler_depend.make 2025-08-14T21:32:29.6001730Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/depend.make 2025-08-14T21:32:29.6002561Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/link.txt 2025-08-14T21:32:29.6003317Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/cmake_clean.cmake 2025-08-14T21:32:29.6003954Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/build.make 2025-08-14T21:32:29.6004578Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/DependInfo.cmake 2025-08-14T21:32:29.6005197Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/flags.make 2025-08-14T21:32:29.6005803Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/progress.make 2025-08-14T21:32:29.6007215Z inflating: 
build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o.d 2025-08-14T21:32:29.6138691Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o 2025-08-14T21:32:29.6139645Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/ 2025-08-14T21:32:29.6140407Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/compiler_depend.ts 2025-08-14T21:32:29.6141222Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/compiler_depend.make 2025-08-14T21:32:29.6142154Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/depend.make 2025-08-14T21:32:29.6142883Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/link.txt 2025-08-14T21:32:29.6143675Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/cmake_clean.cmake 2025-08-14T21:32:29.6144329Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/build.make 2025-08-14T21:32:29.6144987Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/DependInfo.cmake 2025-08-14T21:32:29.6145826Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/flags.make 2025-08-14T21:32:29.6146476Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/progress.make 2025-08-14T21:32:29.6168077Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o.d 2025-08-14T21:32:29.6226990Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o 2025-08-14T21:32:29.6228146Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeDirectoryInformation.cmake 2025-08-14T21:32:29.6233072Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/TargetDirectories.txt 2025-08-14T21:32:29.6233627Z extracting: build/custom_test_artifacts/custom-backend-build/CMakeFiles/progress.marks 2025-08-14T21:32:29.6234358Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile2 2025-08-14T21:32:29.6235039Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile.cmake 2025-08-14T21:32:29.6235626Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/InstallScripts.json 2025-08-14T21:32:29.6236125Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeCache.txt 2025-08-14T21:32:29.6236570Z inflating: build/custom_test_artifacts/custom-backend-build/Makefile 2025-08-14T21:32:29.6237015Z inflating: build/custom_test_artifacts/custom-backend-build/cmake_install.cmake 2025-08-14T21:32:29.6345437Z inflating: build/custom_test_artifacts/custom-backend-build/libcustom_backend.so 2025-08-14T21:32:29.6387898Z inflating: build/custom_test_artifacts/custom-backend-build/test_custom_backend 2025-08-14T21:32:29.6393207Z creating: build/lib/ 2025-08-14T21:32:29.6470667Z inflating: build/lib/libprotobuf-lite.a 2025-08-14T21:32:29.6953013Z inflating: build/lib/libprotobuf.a 2025-08-14T21:32:29.7497238Z inflating: build/lib/libprotoc.a 2025-08-14T21:32:29.7509610Z inflating: build/lib/libpthreadpool.a 2025-08-14T21:32:29.7515140Z inflating: build/lib/libcpuinfo.a 
2025-08-14T21:32:29.7527068Z inflating: build/lib/libcpuinfo_internals.a 2025-08-14T21:32:29.7527815Z inflating: build/lib/libclog.a 2025-08-14T21:32:29.7546643Z inflating: build/lib/libpytorch_qnnpack.a 2025-08-14T21:32:29.7555000Z inflating: build/lib/libnnpack_reference_layers.a 2025-08-14T21:32:29.7753683Z inflating: build/lib/libmicrokernels-prod.a 2025-08-14T21:32:29.7770843Z inflating: build/lib/libnnpack.a 2025-08-14T21:32:29.8728243Z inflating: build/lib/libmicrokernels-all.a 2025-08-14T21:32:29.8804036Z inflating: build/lib/libgtest.a 2025-08-14T21:32:29.8823207Z inflating: build/lib/libgmock.a 2025-08-14T21:32:29.8823921Z inflating: build/lib/libgmock_main.a 2025-08-14T21:32:29.8828646Z inflating: build/lib/libgtest_main.a 2025-08-14T21:32:29.8921791Z inflating: build/lib/libXNNPACK.a 2025-08-14T21:32:29.9007286Z inflating: build/lib/libbenchmark.a 2025-08-14T21:32:29.9007991Z inflating: build/lib/libbenchmark_main.a 2025-08-14T21:32:29.9008731Z inflating: build/lib/libjitprofiling.a 2025-08-14T21:32:29.9079053Z inflating: build/lib/libasmjit.a 2025-08-14T21:32:29.9092166Z inflating: build/lib/libittnotify.a 2025-08-14T21:32:30.0340926Z inflating: build/lib/libfbgemm.a 2025-08-14T21:32:30.0366987Z inflating: build/lib/libtensorpipe_uv.a 2025-08-14T21:32:30.0975448Z inflating: build/lib/libtensorpipe.a 2025-08-14T21:32:30.1110097Z inflating: build/lib/libgloo.a 2025-08-14T21:32:30.1158926Z inflating: build/lib/libonnx_proto.a 2025-08-14T21:32:30.1920892Z inflating: build/lib/libonnx.a 2025-08-14T21:32:31.2810092Z inflating: build/lib/libdnnl.a 2025-08-14T21:32:31.2836145Z inflating: build/lib/libfmt.a 2025-08-14T21:32:31.3119351Z inflating: build/lib/libkineto.a 2025-08-14T21:32:31.3237180Z inflating: build/lib/libc10.so 2025-08-14T21:32:31.3247035Z inflating: build/lib/libtorch_global_deps.so 2025-08-14T21:32:34.7251475Z inflating: build/lib/libtorch_cpu.so 2025-08-14T21:32:34.7252471Z inflating: build/lib/libtorch.so 2025-08-14T21:32:34.7336336Z inflating: build/lib/libtorchbind_test.so 2025-08-14T21:32:34.7354731Z inflating: build/lib/libjitbackend_test.so 2025-08-14T21:32:34.7384467Z inflating: build/lib/libbackend_with_compiler.so 2025-08-14T21:32:34.7413834Z inflating: build/lib/libaoti_custom_ops.so 2025-08-14T21:32:34.7421720Z inflating: build/lib/libshm.so 2025-08-14T21:32:34.9697191Z inflating: build/lib/libtorch_python.so 2025-08-14T21:32:34.9740349Z inflating: build/lib/libnnapi_backend.so 2025-08-14T21:32:34.9740793Z creating: build/bin/ 2025-08-14T21:32:34.9741051Z creating: build/bin/CMakeFiles/ 2025-08-14T21:32:34.9741474Z inflating: build/bin/cmake_install.cmake 2025-08-14T21:32:34.9742065Z inflating: build/bin/CTestTestfile.cmake 2025-08-14T21:32:35.0264331Z inflating: build/bin/protoc-3.13.0.0 2025-08-14T21:32:35.0796410Z inflating: build/bin/protoc 2025-08-14T21:32:35.0861935Z inflating: build/bin/c10_AllocatorConfig_test 2025-08-14T21:32:35.0927785Z inflating: build/bin/c10_CompileTimeFunctionPointer_test 2025-08-14T21:32:35.0993675Z inflating: build/bin/c10_DeviceGuard_test 2025-08-14T21:32:35.1057704Z inflating: build/bin/c10_Device_test 2025-08-14T21:32:35.1125015Z inflating: build/bin/c10_StreamGuard_test 2025-08-14T21:32:35.1196248Z inflating: build/bin/c10_DispatchKeySet_test 2025-08-14T21:32:35.1265654Z inflating: build/bin/c10_SymInt_test 2025-08-14T21:32:35.1335832Z inflating: build/bin/c10_Scalar_test 2025-08-14T21:32:35.1408528Z inflating: build/bin/c10_InlineDeviceGuard_test 2025-08-14T21:32:35.1476286Z inflating: 
build/bin/c10_InlineStreamGuard_test 2025-08-14T21:32:35.1555572Z inflating: build/bin/c10_SizesAndStrides_test 2025-08-14T21:32:35.1614974Z inflating: build/bin/c10_Bitset_test 2025-08-14T21:32:35.1707389Z inflating: build/bin/c10_cow_test 2025-08-14T21:32:35.1769123Z inflating: build/bin/c10_ArrayRef_test 2025-08-14T21:32:35.1833768Z inflating: build/bin/c10_ConstexprCrc_test 2025-08-14T21:32:35.1897811Z inflating: build/bin/c10_DeadlockDetection_test 2025-08-14T21:32:35.1966502Z inflating: build/bin/c10_Enumerate_test 2025-08-14T21:32:35.2035369Z inflating: build/bin/c10_Half_test 2025-08-14T21:32:35.2104313Z inflating: build/bin/c10_IntrusiveList_test 2025-08-14T21:32:35.2178299Z inflating: build/bin/c10_LeftRight_test 2025-08-14T21:32:35.2245489Z inflating: build/bin/c10_Metaprogramming_test 2025-08-14T21:32:35.2317033Z inflating: build/bin/c10_NetworkFlow_test 2025-08-14T21:32:35.2379032Z inflating: build/bin/c10_Synchronized_test 2025-08-14T21:32:35.2450034Z inflating: build/bin/c10_Semaphore_test 2025-08-14T21:32:35.2510203Z inflating: build/bin/c10_TypeIndex_test 2025-08-14T21:32:35.2585932Z inflating: build/bin/c10_ThreadLocal_test 2025-08-14T21:32:35.2646797Z inflating: build/bin/c10_TypeList_test 2025-08-14T21:32:35.2715936Z inflating: build/bin/c10_TypeTraits_test 2025-08-14T21:32:35.2777305Z inflating: build/bin/c10_accumulate_test 2025-08-14T21:32:35.2849365Z inflating: build/bin/c10_bfloat16_test 2025-08-14T21:32:35.2922306Z inflating: build/bin/c10_complex_test 2025-08-14T21:32:35.2991448Z inflating: build/bin/c10_complex_math_test 2025-08-14T21:32:35.3060544Z inflating: build/bin/c10_bit_cast_test 2025-08-14T21:32:35.3122332Z inflating: build/bin/c10_error_test 2025-08-14T21:32:35.3194146Z inflating: build/bin/c10_exception_test 2025-08-14T21:32:35.3255693Z inflating: build/bin/c10_flags_test 2025-08-14T21:32:35.3326547Z inflating: build/bin/c10_irange_test 2025-08-14T21:32:35.3386786Z inflating: build/bin/c10_generic_math_test 2025-08-14T21:32:35.3586659Z inflating: build/bin/c10_intrusive_ptr_test 2025-08-14T21:32:35.3659069Z inflating: build/bin/c10_lazy_test 2025-08-14T21:32:35.3728266Z inflating: build/bin/c10_logging_test 2025-08-14T21:32:35.3805885Z inflating: build/bin/c10_ordered_preserving_dict_test 2025-08-14T21:32:35.3903616Z inflating: build/bin/c10_optional_test 2025-08-14T21:32:35.3975413Z inflating: build/bin/c10_registry_test 2025-08-14T21:32:35.4161813Z inflating: build/bin/c10_small_vector_test 2025-08-14T21:32:35.4234424Z inflating: build/bin/c10_string_util_test 2025-08-14T21:32:35.4300451Z inflating: build/bin/c10_ssize_test 2025-08-14T21:32:35.4361727Z inflating: build/bin/c10_string_view_test 2025-08-14T21:32:35.4429794Z inflating: build/bin/c10_tempfile_test 2025-08-14T21:32:35.4497768Z inflating: build/bin/c10_typeid_test 2025-08-14T21:32:35.4560159Z inflating: build/bin/c10_intrusive_ptr_benchmark 2025-08-14T21:32:35.5253197Z inflating: build/bin/vec_test_all_types_DEFAULT 2025-08-14T21:32:35.5959715Z inflating: build/bin/vec_test_all_types_AVX512 2025-08-14T21:32:35.6677195Z inflating: build/bin/vec_test_all_types_AVX2 2025-08-14T21:32:35.6742975Z inflating: build/bin/static_runtime_bench 2025-08-14T21:32:35.7046125Z inflating: build/bin/static_runtime_test 2025-08-14T21:32:35.7138448Z inflating: build/bin/Dict_test 2025-08-14T21:32:35.7211586Z inflating: build/bin/Dimname_test 2025-08-14T21:32:35.7290079Z inflating: build/bin/MaybeOwned_test 2025-08-14T21:32:35.7359042Z inflating: build/bin/NamedTensor_test 2025-08-14T21:32:35.7439523Z 
inflating: build/bin/apply_utils_test 2025-08-14T21:32:35.7514265Z inflating: build/bin/atest 2025-08-14T21:32:35.7592133Z inflating: build/bin/basic 2025-08-14T21:32:35.7670157Z inflating: build/bin/broadcast_test 2025-08-14T21:32:35.7732707Z inflating: build/bin/cpu_allocator_test 2025-08-14T21:32:35.7802899Z inflating: build/bin/cpu_generator_test 2025-08-14T21:32:35.7874051Z inflating: build/bin/cpu_profiling_allocator_test 2025-08-14T21:32:35.7991648Z inflating: build/bin/cpu_rng_test 2025-08-14T21:32:35.8055501Z inflating: build/bin/dlconvertor_test 2025-08-14T21:32:35.8130694Z inflating: build/bin/extension_backend_test 2025-08-14T21:32:35.8198133Z inflating: build/bin/half_test 2025-08-14T21:32:35.8320022Z inflating: build/bin/ivalue_test 2025-08-14T21:32:35.8381655Z inflating: build/bin/lazy_tensor_test 2025-08-14T21:32:35.8453407Z inflating: build/bin/math_kernel_test 2025-08-14T21:32:35.8518711Z inflating: build/bin/memory_format_test 2025-08-14T21:32:35.8590837Z inflating: build/bin/memory_overlapping_test 2025-08-14T21:32:35.8660158Z inflating: build/bin/mobile_memory_cleanup 2025-08-14T21:32:35.8734181Z inflating: build/bin/native_test 2025-08-14T21:32:35.8796783Z inflating: build/bin/operator_name_test 2025-08-14T21:32:35.8863582Z inflating: build/bin/operators_test 2025-08-14T21:32:35.8929670Z inflating: build/bin/packedtensoraccessor_test 2025-08-14T21:32:35.9020830Z inflating: build/bin/pow_test 2025-08-14T21:32:35.9088078Z inflating: build/bin/quantized_test 2025-08-14T21:32:35.9150384Z inflating: build/bin/reduce_ops_test 2025-08-14T21:32:35.9219346Z inflating: build/bin/reportMemoryUsage_test 2025-08-14T21:32:35.9287537Z inflating: build/bin/scalar_tensor_test 2025-08-14T21:32:35.9368716Z inflating: build/bin/scalar_test 2025-08-14T21:32:35.9431651Z inflating: build/bin/StorageUtils_test 2025-08-14T21:32:35.9501741Z inflating: build/bin/stride_properties_test 2025-08-14T21:32:35.9596689Z inflating: build/bin/tensor_iterator_test 2025-08-14T21:32:35.9668921Z inflating: build/bin/test_parallel 2025-08-14T21:32:35.9731011Z inflating: build/bin/thread_init_test 2025-08-14T21:32:35.9804331Z inflating: build/bin/type_ptr_test 2025-08-14T21:32:35.9879483Z inflating: build/bin/type_test 2025-08-14T21:32:35.9950139Z inflating: build/bin/undefined_tensor_test 2025-08-14T21:32:36.0011124Z inflating: build/bin/verify_api_visibility 2025-08-14T21:32:36.0099310Z inflating: build/bin/legacy_vmap_test 2025-08-14T21:32:36.0168293Z inflating: build/bin/weakref_test 2025-08-14T21:32:36.0229072Z inflating: build/bin/wrapdim_test 2025-08-14T21:32:36.0298868Z inflating: build/bin/xla_tensor_test 2025-08-14T21:32:36.0376149Z inflating: build/bin/IListRef_test 2025-08-14T21:32:36.0500610Z inflating: build/bin/List_test 2025-08-14T21:32:36.0588696Z inflating: build/bin/KernelFunction_test 2025-08-14T21:32:36.0733401Z inflating: build/bin/kernel_function_legacy_test 2025-08-14T21:32:36.0854330Z inflating: build/bin/kernel_function_test 2025-08-14T21:32:36.1008476Z inflating: build/bin/kernel_lambda_legacy_test 2025-08-14T21:32:36.1128742Z inflating: build/bin/kernel_lambda_test 2025-08-14T21:32:36.1207862Z inflating: build/bin/kernel_stackbased_test 2025-08-14T21:32:36.1328433Z inflating: build/bin/make_boxed_from_unboxed_functor_test 2025-08-14T21:32:36.1391458Z inflating: build/bin/CppSignature_test 2025-08-14T21:32:36.1465255Z inflating: build/bin/backend_fallback_test 2025-08-14T21:32:36.1525942Z inflating: build/bin/op_allowlist_test 2025-08-14T21:32:36.1894831Z inflating: 
build/bin/op_registration_test 2025-08-14T21:32:36.1977638Z inflating: build/bin/inline_container_test 2025-08-14T21:32:36.3274379Z inflating: build/bin/test_jit 2025-08-14T21:32:36.3662323Z inflating: build/bin/test_nativert 2025-08-14T21:32:36.3721799Z inflating: build/bin/BackoffTest 2025-08-14T21:32:36.3795752Z inflating: build/bin/FileStoreTest 2025-08-14T21:32:36.3866958Z inflating: build/bin/TCPStoreTest 2025-08-14T21:32:36.3934227Z inflating: build/bin/HashStoreTest 2025-08-14T21:32:36.4021640Z inflating: build/bin/ProcessGroupGlooTest 2025-08-14T21:32:36.4024340Z inflating: build/bin/example_allreduce 2025-08-14T21:32:36.4095760Z inflating: build/bin/test_dist_autograd 2025-08-14T21:32:36.4184885Z inflating: build/bin/test_cpp_rpc 2025-08-14T21:32:36.5516435Z inflating: build/bin/test_api 2025-08-14T21:32:36.5518907Z inflating: build/bin/parallel_benchmark 2025-08-14T21:32:36.5929998Z inflating: build/bin/test_lazy 2025-08-14T21:32:36.5934170Z inflating: build/bin/torch_shm_manager 2025-08-14T21:32:36.5934496Z creating: .additional_ci_files/ 2025-08-14T21:32:36.6024220Z inflating: .additional_ci_files/test-times.json 2025-08-14T21:32:36.6379559Z inflating: .additional_ci_files/test-class-times.json 2025-08-14T21:32:36.6482914Z ##[group]Run rm artifacts.zip 2025-08-14T21:32:36.6483176Z rm artifacts.zip 2025-08-14T21:32:36.6488771Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:32:36.6489073Z env: 2025-08-14T21:32:36.6489257Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:36.6489485Z ##[endgroup] 2025-08-14T21:32:36.6996677Z ##[group]Run df -H 2025-08-14T21:32:36.6996900Z df -H 2025-08-14T21:32:36.7002807Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:32:36.7003113Z env: 2025-08-14T21:32:36.7007531Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:36.7007752Z ##[endgroup] 2025-08-14T21:32:36.7056854Z Filesystem Size Used Avail Use% Mounted on 2025-08-14T21:32:36.7057218Z devtmpfs 4.2M 0 4.2M 0% /dev 2025-08-14T21:32:36.7057533Z tmpfs 85G 0 85G 0% /dev/shm 2025-08-14T21:32:36.7057847Z tmpfs 34G 648k 34G 1% /run 2025-08-14T21:32:36.7058098Z /dev/xvda1 215G 70G 146G 33% / 2025-08-14T21:32:36.7058357Z tmpfs 85G 13k 85G 1% /tmp 2025-08-14T21:32:36.7059000Z /dev/xvda128 11M 1.4M 9.2M 13% /boot/efi 2025-08-14T21:32:36.7094857Z Prepare all required actions 2025-08-14T21:32:36.7095660Z Getting action download info 2025-08-14T21:32:36.8451256Z ##[group]Run ./.github/actions/download-td-artifacts 2025-08-14T21:32:36.8451545Z with: 2025-08-14T21:32:36.8451715Z env: 2025-08-14T21:32:36.8451886Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:36.8452092Z ##[endgroup] 2025-08-14T21:32:36.8534171Z ##[group]Run seemethere/download-artifact-s3@v4 2025-08-14T21:32:36.8534448Z with: 2025-08-14T21:32:36.8534625Z name: td_results 2025-08-14T21:32:36.8534829Z s3-bucket: gha-artifacts 2025-08-14T21:32:36.8535033Z region: us-east-1 2025-08-14T21:32:36.8535220Z env: 2025-08-14T21:32:36.8535393Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:36.8535597Z ##[endgroup] 2025-08-14T21:32:37.3635146Z (node:49047) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023. 2025-08-14T21:32:37.3635540Z 2025-08-14T21:32:37.3635752Z Please migrate your code to use AWS SDK for JavaScript (v3). 
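The seemethere/download-artifact-s3 step above fetches the optional target-determination results by S3 prefix rather than through the GitHub artifacts API. A minimal sketch of an equivalent fetch, assuming the AWS CLI is available on the runner and that the prefix follows the <repository>/<run id>/<artifact name>/ layout the action reports just below (both are assumptions here, not the action's actual implementation):

    # hypothetical equivalent of the td_results fetch; the aws CLI usage and the
    # prefix layout are assumptions, not what seemethere/download-artifact-s3 runs
    prefix="${GITHUB_REPOSITORY}/${GITHUB_RUN_ID}/td_results/"
    aws s3 cp "s3://gha-artifacts/${prefix}" . --recursive --region us-east-1 || true
    # zero matching objects is tolerated; td_results.json is optional for this job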
2025-08-14T21:32:37.3636240Z For more information, check the migration guide at https://a.co/7PzMCcy 2025-08-14T21:32:37.3636688Z (Use `node --trace-warnings ...` to show where the warning was created) 2025-08-14T21:32:37.4525871Z Found 0 objects with prefix pytorch/pytorch/16976338999/td_results/ 2025-08-14T21:32:37.4533540Z Artifact download has finished successfully 2025-08-14T21:32:37.7309044Z ##[group]Run mkdir -p .additional_ci_files 2025-08-14T21:32:37.7309354Z mkdir -p .additional_ci_files 2025-08-14T21:32:37.7309686Z mv td_results.json .additional_ci_files/td_results.json || true 2025-08-14T21:32:37.7320035Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:32:37.7320341Z env: 2025-08-14T21:32:37.7320530Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:37.7320745Z ##[endgroup] 2025-08-14T21:32:37.7380602Z mv: cannot stat 'td_results.json': No such file or directory 2025-08-14T21:32:37.7405393Z ##[group]Run .github/scripts/parse_ref.py 2025-08-14T21:32:37.7405701Z .github/scripts/parse_ref.py 2025-08-14T21:32:37.7410839Z shell: /usr/bin/bash -e {0} 2025-08-14T21:32:37.7411183Z env: 2025-08-14T21:32:37.7411368Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:37.7411573Z ##[endgroup] 2025-08-14T21:32:37.7889398Z Setting output branch=main 2025-08-14T21:32:37.8008361Z Prepare all required actions 2025-08-14T21:32:37.8008751Z Getting action download info 2025-08-14T21:32:37.9834783Z ##[group]Run ./.github/actions/filter-test-configs 2025-08-14T21:32:37.9835069Z with: 2025-08-14T21:32:37.9835525Z github-token: *** 2025-08-14T21:32:37.9838359Z test-matrix: {"include": [{"config": "cpu_inductor_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_inductor_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_inductor_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_inductor_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_avx2_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_timm", "shard": 1, "num_shards": 2, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_timm", "shard": 2, "num_shards": 2, "runner": "linux.10xlarge.avx2"}]} 2025-08-14T21:32:37.9841653Z job-name: linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2) 2025-08-14T21:32:37.9842195Z env: 2025-08-14T21:32:37.9842369Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:37.9842583Z ##[endgroup] 2025-08-14T21:32:37.9903308Z ##[group]Run nick-fields/retry@v3.0.0 2025-08-14T21:32:37.9903577Z with: 2025-08-14T21:32:37.9903756Z shell: bash 2025-08-14T21:32:37.9903944Z timeout_minutes: 10 2025-08-14T21:32:37.9904140Z max_attempts: 5 2025-08-14T21:32:37.9904350Z retry_wait_seconds: 30 2025-08-14T21:32:37.9904939Z command: set -eux # PyYAML 6.0 doesn't work with MacOS x86 anymore # This must run on Python-3.7 (AmazonLinux2) so can't use request=3.32.2 python3 -m pip install 
requests==2.27.1 pyyaml==6.0.2 2025-08-14T21:32:37.9905547Z polling_interval_seconds: 1 2025-08-14T21:32:37.9905765Z warning_on_retry: true 2025-08-14T21:32:37.9905974Z continue_on_error: false 2025-08-14T21:32:37.9906187Z env: 2025-08-14T21:32:37.9906354Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:37.9906763Z GITHUB_TOKEN: *** 2025-08-14T21:32:37.9906957Z ##[endgroup] 2025-08-14T21:32:38.1677148Z + python3 -m pip install requests==2.27.1 pyyaml==6.0.2 2025-08-14T21:32:38.4310270Z Defaulting to user installation because normal site-packages is not writeable 2025-08-14T21:32:38.5832106Z Collecting requests==2.27.1 2025-08-14T21:32:38.6011133Z Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB) 2025-08-14T21:32:38.8253159Z Collecting pyyaml==6.0.2 2025-08-14T21:32:38.8288825Z Downloading PyYAML-6.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (737 kB) 2025-08-14T21:32:39.3169420Z Collecting charset-normalizer~=2.0.0 2025-08-14T21:32:39.3213745Z Downloading charset_normalizer-2.0.12-py3-none-any.whl (39 kB) 2025-08-14T21:32:39.4141384Z Collecting certifi>=2017.4.17 2025-08-14T21:32:39.4188614Z Downloading certifi-2025.8.3-py3-none-any.whl (161 kB) 2025-08-14T21:32:39.4577654Z Requirement already satisfied: idna<4,>=2.5 in /usr/lib/python3.9/site-packages (from requests==2.27.1) (2.10) 2025-08-14T21:32:39.4580829Z Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/lib/python3.9/site-packages (from requests==2.27.1) (1.25.10) 2025-08-14T21:32:39.5482117Z Installing collected packages: charset-normalizer, certifi, requests, pyyaml 2025-08-14T21:32:39.8713243Z Successfully installed certifi-2025.8.3 charset-normalizer-2.0.12 pyyaml-6.0.2 requests-2.27.1 2025-08-14T21:32:40.0781212Z Command completed after 1 attempt(s). 2025-08-14T21:32:40.0841586Z ##[group]Run set -x 2025-08-14T21:32:40.0841809Z set -x 2025-08-14T21:32:40.0841992Z  2025-08-14T21:32:40.0842285Z # Use relative path here as this could be checked out anywhere, not necessarily 2025-08-14T21:32:40.0842654Z # in runner workspace 2025-08-14T21:32:40.0842956Z python3 "${GITHUB_ACTION_PATH}/../../scripts/parse_ref.py" 2025-08-14T21:32:40.0853218Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:32:40.0853511Z env: 2025-08-14T21:32:40.0853698Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:40.0853912Z ##[endgroup] 2025-08-14T21:32:40.0881447Z + python3 /home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/filter-test-configs/../../scripts/parse_ref.py 2025-08-14T21:32:40.1085276Z Setting output branch=main 2025-08-14T21:32:40.1133991Z ##[group]Run echo "Workflow: ${GITHUB_WORKFLOW}" 2025-08-14T21:32:40.1134326Z echo "Workflow: ${GITHUB_WORKFLOW}" 2025-08-14T21:32:40.1134606Z echo "Job name: ${JOB_NAME}" 2025-08-14T21:32:40.1134832Z  2025-08-14T21:32:40.1135126Z # Use relative path here as this could be checked out anywhere, not necessarily 2025-08-14T21:32:40.1135485Z # in runner workspace 2025-08-14T21:32:40.1135818Z python3 "${GITHUB_ACTION_PATH}/../../scripts/filter_test_configs.py" \ 2025-08-14T21:32:40.1136187Z  --workflow "${GITHUB_WORKFLOW}" \ 2025-08-14T21:32:40.1136528Z  --job-name "${JOB_NAME}" \ 2025-08-14T21:32:40.1139297Z  --test-matrix "{"include": [{"config": "cpu_inductor_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": 
"dynamic_cpu_inductor_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_inductor_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_inductor_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_avx2_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_timm", "shard": 1, "num_shards": 2, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_timm", "shard": 2, "num_shards": 2, "runner": "linux.10xlarge.avx2"}]}" \ 2025-08-14T21:32:40.1142128Z  --selected-test-configs "" \ 2025-08-14T21:32:40.1142396Z  --pr-number "${PR_NUMBER}" \ 2025-08-14T21:32:40.1142653Z  --tag "${TAG}" \ 2025-08-14T21:32:40.1142873Z  --event-name "${EVENT_NAME}" \ 2025-08-14T21:32:40.1143131Z  --schedule "${SCHEDULE}" \ 2025-08-14T21:32:40.1143372Z  --branch "${HEAD_BRANCH}" 2025-08-14T21:32:40.1149211Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:32:40.1149508Z env: 2025-08-14T21:32:40.1149693Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:40.1150194Z GITHUB_TOKEN: *** 2025-08-14T21:32:40.1150700Z JOB_NAME: linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2) 2025-08-14T21:32:40.1155443Z PR_NUMBER: 2025-08-14T21:32:40.1155624Z TAG: 2025-08-14T21:32:40.1155786Z EVENT_NAME: schedule 2025-08-14T21:32:40.1155997Z SCHEDULE: 45 0,4,8,12,16,20 * * 1-5 2025-08-14T21:32:40.1156232Z HEAD_BRANCH: main 2025-08-14T21:32:40.1156414Z ##[endgroup] 2025-08-14T21:32:40.1184764Z Workflow: inductor-periodic 2025-08-14T21:32:40.1185627Z Job name: linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2) 2025-08-14T21:32:40.3056935Z Setting output keep-going=True 2025-08-14T21:32:40.3057278Z Setting output ci-verbose-test-logs=False 2025-08-14T21:32:40.3057587Z Setting output ci-test-showlocals=False 2025-08-14T21:32:40.3057852Z Setting output ci-no-test-timeout=False 2025-08-14T21:32:40.3058106Z Setting output ci-no-td=False 2025-08-14T21:32:40.3058343Z Setting output ci-td-distributed=False 2025-08-14T21:32:40.3058599Z Setting output is-unstable=False 2025-08-14T21:32:40.3058842Z Setting output reenabled-issues= 2025-08-14T21:32:40.3061569Z Setting output test-matrix={"include": [{"config": "cpu_inductor_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_inductor_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_inductor_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_inductor_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_avx2_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.10xlarge.avx2"}, {"config": 
"cpu_inductor_freezing_avx2_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_timm", "shard": 1, "num_shards": 2, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_timm", "shard": 2, "num_shards": 2, "runner": "linux.10xlarge.avx2"}]} 2025-08-14T21:32:40.3064471Z Setting output is-test-matrix-empty=False 2025-08-14T21:32:40.3227720Z ##[group]Run echo "Filtered matrix:" 2025-08-14T21:32:40.3228052Z echo "Filtered matrix:" 2025-08-14T21:32:40.3230787Z echo "{"include": [{"config": "cpu_inductor_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_inductor_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_inductor_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_inductor_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_avx2_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_timm", "shard": 1, "num_shards": 2, "runner": "linux.10xlarge.avx2"}, {"config": "cpu_inductor_freezing_avx2_timm", "shard": 2, "num_shards": 2, "runner": "linux.10xlarge.avx2"}]}" 2025-08-14T21:32:40.3233588Z  2025-08-14T21:32:40.3233763Z echo 2025-08-14T21:32:40.3233986Z echo "Is the current job unstable? False" 2025-08-14T21:32:40.3234240Z  2025-08-14T21:32:40.3234407Z echo 2025-08-14T21:32:40.3234618Z echo "Is keep-going label set? True" 2025-08-14T21:32:40.3234855Z  2025-08-14T21:32:40.3235021Z echo 2025-08-14T21:32:40.3235208Z echo "Reenabled issues? " 2025-08-14T21:32:40.3245065Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:32:40.3245372Z env: 2025-08-14T21:32:40.3245568Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:40.3245775Z ##[endgroup] 2025-08-14T21:32:40.3272791Z Filtered matrix: 2025-08-14T21:32:40.3275714Z {include: [{config: cpu_inductor_huggingface, shard: 1, num_shards: 1, runner: linux.8xlarge.amx}, {config: cpu_inductor_timm, shard: 1, num_shards: 2, runner: linux.8xlarge.amx}, {config: cpu_inductor_timm, shard: 2, num_shards: 2, runner: linux.8xlarge.amx}, {config: dynamic_cpu_inductor_huggingface, shard: 1, num_shards: 1, runner: linux.8xlarge.amx}, {config: dynamic_cpu_inductor_timm, shard: 1, num_shards: 2, runner: linux.8xlarge.amx}, {config: dynamic_cpu_inductor_timm, shard: 2, num_shards: 2, runner: linux.8xlarge.amx}, {config: cpu_inductor_freezing_avx2_huggingface, shard: 1, num_shards: 1, runner: linux.10xlarge.avx2}, {config: cpu_inductor_freezing_avx2_torchbench, shard: 1, num_shards: 2, runner: linux.10xlarge.avx2}, {config: cpu_inductor_freezing_avx2_torchbench, shard: 2, num_shards: 2, runner: linux.10xlarge.avx2}, {config: cpu_inductor_freezing_avx2_timm, shard: 1, num_shards: 2, runner: linux.10xlarge.avx2}, {config: cpu_inductor_freezing_avx2_timm, shard: 2, num_shards: 2, runner: linux.10xlarge.avx2}]} 2025-08-14T21:32:40.3278373Z 2025-08-14T21:32:40.3278476Z Is the current job unstable? 
False 2025-08-14T21:32:40.3278646Z 2025-08-14T21:32:40.3278752Z Is keep-going label set? True 2025-08-14T21:32:40.3278904Z 2025-08-14T21:32:40.3278991Z Reenabled issues? 2025-08-14T21:32:40.3346212Z ##[group]Run echo "timeout=$((JOB_TIMEOUT-30))" >> "${GITHUB_OUTPUT}" 2025-08-14T21:32:40.3346635Z echo "timeout=$((JOB_TIMEOUT-30))" >> "${GITHUB_OUTPUT}" 2025-08-14T21:32:40.3352696Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:32:40.3352999Z env: 2025-08-14T21:32:40.3353187Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:40.3353392Z JOB_TIMEOUT: 240 2025-08-14T21:32:40.3353589Z ##[endgroup] 2025-08-14T21:32:40.3420887Z ##[group]Run env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-08-14T21:32:40.3421293Z env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-08-14T21:32:40.3421640Z env | grep '^CI' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-08-14T21:32:40.3426407Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:32:40.3426704Z env: 2025-08-14T21:32:40.3426895Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:40.3427103Z ##[endgroup] 2025-08-14T21:32:40.3563804Z ##[group]Run set -x 2025-08-14T21:32:40.3564088Z set -x 2025-08-14T21:32:40.3564270Z  2025-08-14T21:32:40.3564478Z if [[ $TEST_CONFIG == 'multigpu' ]]; then 2025-08-14T21:32:40.3564790Z  TEST_COMMAND=.ci/pytorch/multigpu-test.sh 2025-08-14T21:32:40.3565096Z elif [[ $BUILD_ENVIRONMENT == *onnx* ]]; then 2025-08-14T21:32:40.3565380Z  TEST_COMMAND=.ci/onnx/test.sh 2025-08-14T21:32:40.3565617Z else 2025-08-14T21:32:40.3565823Z  TEST_COMMAND=.ci/pytorch/test.sh 2025-08-14T21:32:40.3566056Z fi 2025-08-14T21:32:40.3566222Z  2025-08-14T21:32:40.3566432Z # Leaving 1GB for the runner and other things 2025-08-14T21:32:40.3566876Z TOTAL_AVAILABLE_MEMORY_IN_GB=$(awk '/MemTotal/ { printf "%.3f \n", $2/1024/1024 - 1 }' /proc/meminfo) 2025-08-14T21:32:40.3567532Z # https://docs.docker.com/engine/containers/resource_constraints/#--memory-swap-details, the 3GB swap 2025-08-14T21:32:40.3568054Z # comes from https://github.com/pytorch/test-infra/pull/6058 2025-08-14T21:32:40.3568446Z TOTAL_MEMORY_WITH_SWAP=$(("${TOTAL_AVAILABLE_MEMORY_IN_GB%.*}" + 3)) 2025-08-14T21:32:40.3568755Z  2025-08-14T21:32:40.3568967Z if [[ ${BUILD_ENVIRONMENT} == *"s390x"* ]]; then 2025-08-14T21:32:40.3569224Z  SHM_OPTS= 2025-08-14T21:32:40.3569426Z  JENKINS_USER= 2025-08-14T21:32:40.3569696Z  # ensure that docker container cleanly exits in 12 hours 2025-08-14T21:32:40.3570053Z  # if for some reason cleanup action doesn't stop container 2025-08-14T21:32:40.3570351Z  # when job is cancelled 2025-08-14T21:32:40.3570598Z  DOCKER_SHELL_CMD="sleep 12h" 2025-08-14T21:32:40.3570947Z else 2025-08-14T21:32:40.3571143Z  SHM_OPTS="--shm-size=${SHM_SIZE}" 2025-08-14T21:32:40.3571403Z  JENKINS_USER="--user jenkins" 2025-08-14T21:32:40.3571655Z  DOCKER_SHELL_CMD= 2025-08-14T21:32:40.3571857Z fi 2025-08-14T21:32:40.3572027Z  2025-08-14T21:32:40.3572361Z # detached container should get cleaned up by teardown_ec2_linux 2025-08-14T21:32:40.3572868Z # TODO: Stop building test binaries as part of the build phase 2025-08-14T21:32:40.3573318Z # Used for GPU_FLAG, SHM_OPTS, JENKINS_USER and DOCKER_SHELL_CMD since that doesn't play nice 2025-08-14T21:32:40.3573721Z # shellcheck disable=SC2086,SC2090 2025-08-14T21:32:40.3573991Z container_name=$(docker run \ 2025-08-14T21:32:40.3574230Z  ${GPU_FLAG:-} \ 2025-08-14T21:32:40.3574479Z  ${SCCACHE_SERVER_PORT_DOCKER_FLAG:-} \ 2025-08-14T21:32:40.3574746Z  -e BUILD_ENVIRONMENT \ 
2025-08-14T21:32:40.3574984Z  -e PR_NUMBER \ 2025-08-14T21:32:40.3575210Z  -e GITHUB_ACTIONS \ 2025-08-14T21:32:40.3575438Z  -e GITHUB_REPOSITORY \ 2025-08-14T21:32:40.3575673Z  -e GITHUB_WORKFLOW \ 2025-08-14T21:32:40.3596790Z  -e GITHUB_JOB \ 2025-08-14T21:32:40.3597077Z  -e GITHUB_RUN_ID \ 2025-08-14T21:32:40.3597335Z  -e GITHUB_RUN_NUMBER \ 2025-08-14T21:32:40.3597579Z  -e GITHUB_RUN_ATTEMPT \ 2025-08-14T21:32:40.3597817Z  -e JOB_ID \ 2025-08-14T21:32:40.3598015Z  -e JOB_NAME \ 2025-08-14T21:32:40.3598222Z  -e BASE_SHA \ 2025-08-14T21:32:40.3598425Z  -e BRANCH \ 2025-08-14T21:32:40.3598616Z  -e SHA1 \ 2025-08-14T21:32:40.3598823Z  -e AWS_DEFAULT_REGION \ 2025-08-14T21:32:40.3599057Z  -e IN_WHEEL_TEST \ 2025-08-14T21:32:40.3599272Z  -e SHARD_NUMBER \ 2025-08-14T21:32:40.3599485Z  -e TEST_CONFIG \ 2025-08-14T21:32:40.3599716Z  -e NUM_TEST_SHARDS \ 2025-08-14T21:32:40.3599949Z  -e REENABLED_ISSUES \ 2025-08-14T21:32:40.3600184Z  -e CONTINUE_THROUGH_ERROR \ 2025-08-14T21:32:40.3600589Z  -e VERBOSE_TEST_LOGS \ 2025-08-14T21:32:40.3600837Z  -e TEST_SHOWLOCALS \ 2025-08-14T21:32:40.3601063Z  -e NO_TEST_TIMEOUT \ 2025-08-14T21:32:40.3601428Z  -e NO_TD \ 2025-08-14T21:32:40.3601740Z  -e TD_DISTRIBUTED \ 2025-08-14T21:32:40.3601958Z  -e PR_LABELS \ 2025-08-14T21:32:40.3602201Z  -e MAX_JOBS="$(nproc --ignore=2)" \ 2025-08-14T21:32:40.3602470Z  -e SCCACHE_BUCKET \ 2025-08-14T21:32:40.3602697Z  -e SCCACHE_REGION \ 2025-08-14T21:32:40.3602905Z  -e XLA_CUDA \ 2025-08-14T21:32:40.3603141Z  -e XLA_CLANG_CACHE_S3_BUCKET_NAME \ 2025-08-14T21:32:40.3603422Z  -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK \ 2025-08-14T21:32:40.3603700Z  -e PYTORCH_TEST_RERUN_DISABLED_TESTS \ 2025-08-14T21:32:40.3603998Z  -e SKIP_SCCACHE_INITIALIZATION=1 \ 2025-08-14T21:32:40.3604262Z  -e HUGGING_FACE_HUB_TOKEN \ 2025-08-14T21:32:40.3604521Z  -e SCRIBE_GRAPHQL_ACCESS_TOKEN \ 2025-08-14T21:32:40.3604768Z  -e DASHBOARD_TAG \ 2025-08-14T21:32:40.3604995Z  -e ARTIFACTS_FILE_SUFFIX \ 2025-08-14T21:32:40.3605279Z  --memory="${TOTAL_AVAILABLE_MEMORY_IN_GB%.*}g" \ 2025-08-14T21:32:40.3605650Z  --memory-swap="${TOTAL_MEMORY_WITH_SWAP}g" \ 2025-08-14T21:32:40.3605970Z  --env-file="/tmp/github_env_${GITHUB_RUN_ID}" \ 2025-08-14T21:32:40.3606274Z  --security-opt seccomp=unconfined \ 2025-08-14T21:32:40.3606527Z  --cap-add=SYS_PTRACE \ 2025-08-14T21:32:40.3606761Z  --ipc=host \ 2025-08-14T21:32:40.3606965Z  ${SHM_OPTS} \ 2025-08-14T21:32:40.3607166Z  --tty \ 2025-08-14T21:32:40.3607347Z  --detach \ 2025-08-14T21:32:40.3607648Z  --name="${container_name}" \ 2025-08-14T21:32:40.3607893Z  ${JENKINS_USER} \ 2025-08-14T21:32:40.3608157Z  -v "${GITHUB_WORKSPACE}:/var/lib/jenkins/workspace" \ 2025-08-14T21:32:40.3608467Z  -w /var/lib/jenkins/workspace \ 2025-08-14T21:32:40.3608719Z  "${DOCKER_IMAGE}" \ 2025-08-14T21:32:40.3608932Z  ${DOCKER_SHELL_CMD} 2025-08-14T21:32:40.3609143Z ) 2025-08-14T21:32:40.3609379Z # Propagate download.pytorch.org IP to container 2025-08-14T21:32:40.3609878Z grep download.pytorch.org /etc/hosts | docker exec -i "${container_name}" sudo bash -c "/bin/cat >> /etc/hosts" 2025-08-14T21:32:40.3610398Z echo "DOCKER_CONTAINER_ID=${container_name}" >> "${GITHUB_ENV}" 2025-08-14T21:32:40.3610711Z  2025-08-14T21:32:40.3610927Z if [[ ${BUILD_ENVIRONMENT} == *"s390x"* ]]; then 2025-08-14T21:32:40.3611355Z  docker exec -t "${container_name}" sh -c "python3 -m pip install -r .ci/docker/requirements-ci.txt" 2025-08-14T21:32:40.3611746Z fi 2025-08-14T21:32:40.3611919Z  2025-08-14T21:32:40.3612292Z docker exec -t "${container_name}" sh -c 
"python3 -m pip install $(echo dist/*.whl)[opt-einsum] && ${TEST_COMMAND}" 2025-08-14T21:32:40.3618301Z shell: /usr/bin/bash -e {0} 2025-08-14T21:32:40.3618533Z env: 2025-08-14T21:32:40.3618768Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:32:40.3619050Z BUILD_ENVIRONMENT: linux-jammy-py3.9-gcc11-build 2025-08-14T21:32:40.3619330Z PR_NUMBER: 2025-08-14T21:32:40.3619541Z GITHUB_REPOSITORY: pytorch/pytorch 2025-08-14T21:32:40.3619792Z GITHUB_WORKFLOW: inductor-periodic 2025-08-14T21:32:40.3620037Z GITHUB_JOB: test 2025-08-14T21:32:40.3620232Z GITHUB_RUN_ID: 16976338999 2025-08-14T21:32:40.3620447Z GITHUB_RUN_NUMBER: 66307 2025-08-14T21:32:40.3620646Z GITHUB_RUN_ATTEMPT: 1 2025-08-14T21:32:40.3620842Z JOB_ID: 48128301923 2025-08-14T21:32:40.3621358Z JOB_NAME: linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2) 2025-08-14T21:32:40.3621892Z BRANCH: main 2025-08-14T21:32:40.3622106Z SHA1: 1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:32:40.3622493Z BASE_SHA: 1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:32:40.3622789Z TEST_CONFIG: cpu_inductor_freezing_avx2_huggingface 2025-08-14T21:32:40.3623046Z SHARD_NUMBER: 1 2025-08-14T21:32:40.3623235Z NUM_TEST_SHARDS: 1 2025-08-14T21:32:40.3623433Z REENABLED_ISSUES: 2025-08-14T21:32:40.3623626Z CONTINUE_THROUGH_ERROR: True 2025-08-14T21:32:40.3623848Z VERBOSE_TEST_LOGS: False 2025-08-14T21:32:40.3624062Z TEST_SHOWLOCALS: False 2025-08-14T21:32:40.3624260Z NO_TEST_TIMEOUT: False 2025-08-14T21:32:40.3624458Z NO_TD: False 2025-08-14T21:32:40.3624640Z TD_DISTRIBUTED: False 2025-08-14T21:32:40.3624875Z SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2 2025-08-14T21:32:40.3625152Z SCCACHE_REGION: us-east-1 2025-08-14T21:32:40.3625365Z SHM_SIZE: 1g 2025-08-14T21:32:40.3625974Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:32:40.3626615Z XLA_CUDA: 2025-08-14T21:32:40.3626897Z XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla 2025-08-14T21:32:40.3627244Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK: 0 2025-08-14T21:32:40.3627492Z PYTORCH_TEST_RERUN_DISABLED_TESTS: 0 2025-08-14T21:32:40.3627727Z DASHBOARD_TAG: 2025-08-14T21:32:40.3628091Z HUGGING_FACE_HUB_TOKEN: *** 2025-08-14T21:32:40.3628411Z SCRIBE_GRAPHQL_ACCESS_TOKEN: *** 2025-08-14T21:32:40.3628826Z ARTIFACTS_FILE_SUFFIX: test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923 2025-08-14T21:32:40.3629250Z ##[endgroup] 2025-08-14T21:32:40.3664367Z + [[ cpu_inductor_freezing_avx2_huggingface == \m\u\l\t\i\g\p\u ]] 2025-08-14T21:32:40.3664834Z + [[ linux-jammy-py3.9-gcc11-build == *onnx* ]] 2025-08-14T21:32:40.3665406Z + TEST_COMMAND=.ci/pytorch/test.sh 2025-08-14T21:32:40.3665736Z ++ awk '/MemTotal/ { printf "%.3f \n", $2/1024/1024 - 1 }' /proc/meminfo 2025-08-14T21:32:40.3683728Z + TOTAL_AVAILABLE_MEMORY_IN_GB='156.355 ' 2025-08-14T21:32:40.3684053Z + TOTAL_MEMORY_WITH_SWAP=159 2025-08-14T21:32:40.3684374Z + [[ linux-jammy-py3.9-gcc11-build == *\s\3\9\0\x* ]] 2025-08-14T21:32:40.3684725Z + SHM_OPTS=--shm-size=1g 2025-08-14T21:32:40.3684969Z + JENKINS_USER='--user jenkins' 2025-08-14T21:32:40.3685181Z + DOCKER_SHELL_CMD= 2025-08-14T21:32:40.3694514Z +++ nproc --ignore=2 2025-08-14T21:32:40.3959988Z ++ docker run -e BUILD_ENVIRONMENT -e PR_NUMBER -e GITHUB_ACTIONS -e GITHUB_REPOSITORY -e GITHUB_WORKFLOW -e GITHUB_JOB -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e 
GITHUB_RUN_ATTEMPT -e JOB_ID -e JOB_NAME -e BASE_SHA -e BRANCH -e SHA1 -e AWS_DEFAULT_REGION -e IN_WHEEL_TEST -e SHARD_NUMBER -e TEST_CONFIG -e NUM_TEST_SHARDS -e REENABLED_ISSUES -e CONTINUE_THROUGH_ERROR -e VERBOSE_TEST_LOGS -e TEST_SHOWLOCALS -e NO_TEST_TIMEOUT -e NO_TD -e TD_DISTRIBUTED -e PR_LABELS -e MAX_JOBS=38 -e SCCACHE_BUCKET -e SCCACHE_REGION -e XLA_CUDA -e XLA_CLANG_CACHE_S3_BUCKET_NAME -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK -e PYTORCH_TEST_RERUN_DISABLED_TESTS -e SKIP_SCCACHE_INITIALIZATION=1 -e HUGGING_FACE_HUB_TOKEN -e SCRIBE_GRAPHQL_ACCESS_TOKEN -e DASHBOARD_TAG -e ARTIFACTS_FILE_SUFFIX --memory=156g --memory-swap=159g --env-file=/tmp/github_env_16976338999 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --ipc=host --shm-size=1g --tty --detach --name= --user jenkins -v /home/ec2-user/actions-runner/_work/pytorch/pytorch:/var/lib/jenkins/workspace -w /var/lib/jenkins/workspace 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:33:16.8634044Z + container_name=047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T21:33:16.8634762Z + grep download.pytorch.org /etc/hosts 2025-08-14T21:33:16.8635707Z + docker exec -i 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 sudo bash -c '/bin/cat >> /etc/hosts' 2025-08-14T21:33:16.9939837Z + echo DOCKER_CONTAINER_ID=047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T21:33:16.9940409Z + [[ linux-jammy-py3.9-gcc11-build == *\s\3\9\0\x* ]] 2025-08-14T21:33:16.9942865Z ++ echo dist/torch-2.9.0a0+git1fc683c-cp39-cp39-linux_x86_64.whl 2025-08-14T21:33:16.9949144Z + docker exec -t 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 sh -c 'python3 -m pip install dist/torch-2.9.0a0+git1fc683c-cp39-cp39-linux_x86_64.whl[opt-einsum] && .ci/pytorch/test.sh' 2025-08-14T21:33:17.4739947Z Processing ./dist/torch-2.9.0a0+git1fc683c-cp39-cp39-linux_x86_64.whl (from torch==2.9.0a0+git1fc683c) 2025-08-14T21:33:18.1567568Z Requirement already satisfied: filelock in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (3.18.0) 2025-08-14T21:33:18.1570811Z Requirement already satisfied: typing-extensions>=4.10.0 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (4.14.1) 2025-08-14T21:33:18.1582190Z Requirement already satisfied: sympy>=1.13.3 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (1.13.3) 2025-08-14T21:33:18.1585249Z Requirement already satisfied: networkx>=2.5.1 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (2.8.8) 2025-08-14T21:33:18.1592216Z Requirement already satisfied: jinja2 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (3.1.6) 2025-08-14T21:33:18.1593208Z Requirement already satisfied: fsspec>=0.8.5 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (2025.3.0) 2025-08-14T21:33:18.1609284Z Requirement already satisfied: opt-einsum>=3.3 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (3.3.0) 2025-08-14T21:33:18.2017645Z Requirement already satisfied: numpy>=1.7 in 
/opt/conda/envs/py_3.9/lib/python3.9/site-packages (from opt-einsum>=3.3->torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (1.22.4) 2025-08-14T21:33:18.2038100Z Requirement already satisfied: mpmath<1.4,>=1.1.0 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from sympy>=1.13.3->torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (1.3.0) 2025-08-14T21:33:18.2105154Z Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from jinja2->torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (3.0.2) 2025-08-14T21:33:19.3096374Z Installing collected packages: torch 2025-08-14T21:33:29.9908006Z ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. 2025-08-14T21:33:29.9908751Z dall-e 0.1 requires torchvision, which is not installed. 2025-08-14T21:33:29.9909093Z effdet 0.4.1 requires torchvision, which is not installed. 2025-08-14T21:33:29.9909543Z pytorch-labs-segment-anything-fast 0.2 requires torchao, which is not installed. 2025-08-14T21:33:29.9910112Z pytorch-labs-segment-anything-fast 0.2 requires torchvision>=0.17.0.dev20231026, which is not installed. 2025-08-14T21:33:29.9910679Z timm 1.0.14 requires torchvision, which is not installed. 2025-08-14T21:33:29.9911061Z Successfully installed torch-2.9.0a0+git1fc683c 2025-08-14T21:33:30.1087548Z + export TERM=vt100 2025-08-14T21:33:30.1087777Z + TERM=vt100 2025-08-14T21:33:30.1091221Z ++ dirname .ci/pytorch/test.sh 2025-08-14T21:33:30.1101059Z + source .ci/pytorch/common.sh 2025-08-14T21:33:30.1103914Z +++ dirname .ci/pytorch/common.sh 2025-08-14T21:33:30.1109082Z ++ source .ci/pytorch/common_utils.sh 2025-08-14T21:33:30.1110276Z +++ declare -f -t trap_add 2025-08-14T21:33:30.1123051Z ++ set -ex -o pipefail 2025-08-14T21:33:30.1123371Z ++ [[ linux-jammy-py3.9-gcc11-build == *rocm* ]] 2025-08-14T21:33:30.1123704Z ++ BUILD_TEST_LIBTORCH=0 2025-08-14T21:33:30.1130636Z ++ dirname .ci/pytorch/test.sh 2025-08-14T21:33:30.1134078Z + source .ci/pytorch/common-build.sh 2025-08-14T21:33:30.1135291Z ++ [[ linux-jammy-py3.9-gcc11-build != *win-* ]] 2025-08-14T21:33:30.1146055Z ++++ dirname .ci/pytorch/common-build.sh 2025-08-14T21:33:30.1149780Z +++ cd .ci/pytorch 2025-08-14T21:33:30.1149990Z +++ pwd -P 2025-08-14T21:33:30.1151798Z ++ script_dir=/var/lib/jenkins/workspace/.ci/pytorch 2025-08-14T21:33:30.1152141Z ++ [[ linux-jammy-py3.9-gcc11-build == *-pch* ]] 2025-08-14T21:33:30.1152399Z ++ which sccache 2025-08-14T21:33:30.1266793Z ++ [[ -z ossci-compiler-cache-circleci-v2 ]] 2025-08-14T21:33:30.1267191Z ++ sccache --stop-server 2025-08-14T21:33:30.1297011Z ++ true 2025-08-14T21:33:30.1297241Z ++ rm -f /var/lib/jenkins/sccache_error.log 2025-08-14T21:33:30.1305563Z ++ trap_add sccache_epilogue EXIT 2025-08-14T21:33:30.1305882Z ++ trap_add_cmd=sccache_epilogue 2025-08-14T21:33:30.1306131Z ++ shift 2025-08-14T21:33:30.1306332Z ++ for trap_add_name in "$@" 2025-08-14T21:33:30.1311806Z ++++ trap -p EXIT 2025-08-14T21:33:30.1313686Z +++ eval 'extract_trap_cmd ' 2025-08-14T21:33:30.1314081Z ++++ extract_trap_cmd 2025-08-14T21:33:30.1322691Z ++++ printf '%s\n' '' 2025-08-14T21:33:30.1322988Z +++ printf '%s\n' sccache_epilogue 2025-08-14T21:33:30.1323360Z ++ trap -- ' 2025-08-14T21:33:30.1323543Z sccache_epilogue' EXIT 2025-08-14T21:33:30.1323747Z ++ [[ -n 1 ]] 2025-08-14T21:33:30.1324295Z ++ echo 'Skipping sccache server initialization, setting environment variables' 
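For reference, the --memory=156g --memory-swap=159g limits in the expanded docker run above come from the MemTotal arithmetic in the step script. A minimal standalone sketch of that calculation, reusing the same awk expression:

    # reserve 1 GB for the runner, then allow 3 GB of swap on top of the limit
    total_gb=$(awk '/MemTotal/ { printf "%.3f\n", $2/1024/1024 - 1 }' /proc/meminfo)   # e.g. 156.355
    mem_limit="${total_gb%.*}g"                  # drop the fraction -> 156g
    swap_limit="$(( ${total_gb%.*} + 3 ))g"      # 156 + 3 -> 159g
    echo "--memory=${mem_limit} --memory-swap=${swap_limit}"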
2025-08-14T21:33:30.1325100Z Skipping sccache server initialization, setting environment variables 2025-08-14T21:33:30.1325438Z ++ export SCCACHE_IDLE_TIMEOUT=0 2025-08-14T21:33:30.1325660Z ++ SCCACHE_IDLE_TIMEOUT=0 2025-08-14T21:33:30.1325931Z ++ export SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 2025-08-14T21:33:30.1326268Z ++ SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 2025-08-14T21:33:30.1326833Z ++ export RUST_LOG=sccache::server=error 2025-08-14T21:33:30.1327089Z ++ RUST_LOG=sccache::server=error 2025-08-14T21:33:30.1327318Z ++ sccache --zero-stats 2025-08-14T21:33:30.2762905Z Statistics zeroed. 2025-08-14T21:33:30.2772112Z ++ which ccache 2025-08-14T21:33:30.2833661Z + [[ linux-jammy-py3.9-gcc11-build != *rocm* ]] 2025-08-14T21:33:30.2834048Z + [[ linux-jammy-py3.9-gcc11-build != *s390x* ]] 2025-08-14T21:33:30.2834393Z + [[ -d /var/lib/jenkins/workspace ]] 2025-08-14T21:33:30.2840675Z ++ stat -c %u /var/lib/jenkins/workspace 2025-08-14T21:33:30.2860409Z + WORKSPACE_ORIGINAL_OWNER_ID=1000 2025-08-14T21:33:30.2860723Z + trap_add cleanup_workspace EXIT 2025-08-14T21:33:30.2861003Z + trap_add_cmd=cleanup_workspace 2025-08-14T21:33:30.2861242Z + shift 2025-08-14T21:33:30.2861433Z + for trap_add_name in "$@" 2025-08-14T21:33:30.2861682Z +++ trap -p EXIT 2025-08-14T21:33:30.2862755Z ++ eval 'extract_trap_cmd trap -- '\'' 2025-08-14T21:33:30.2863074Z sccache_epilogue'\'' EXIT' 2025-08-14T21:33:30.2863324Z +++ extract_trap_cmd trap -- ' 2025-08-14T21:33:30.2863741Z sccache_epilogue' EXIT 2025-08-14T21:33:30.2863958Z +++ printf '%s\n' ' 2025-08-14T21:33:30.2864182Z sccache_epilogue' 2025-08-14T21:33:30.2864412Z ++ printf '%s\n' cleanup_workspace 2025-08-14T21:33:30.2864686Z + trap -- ' 2025-08-14T21:33:30.2864877Z sccache_epilogue 2025-08-14T21:33:30.2865078Z cleanup_workspace' EXIT 2025-08-14T21:33:30.2865430Z + sudo chown -R jenkins /var/lib/jenkins/workspace 2025-08-14T21:33:31.0356929Z + git config --global --add safe.directory /var/lib/jenkins/workspace 2025-08-14T21:33:31.0371800Z + echo 'Environment variables:' 2025-08-14T21:33:31.0372082Z Environment variables: 2025-08-14T21:33:31.0372308Z + env 2025-08-14T21:33:31.0385179Z GITHUB_WORKSPACE=/home/ec2-user/actions-runner/_work/pytorch/pytorch 2025-08-14T21:33:31.0385617Z CONTINUE_THROUGH_ERROR=True 2025-08-14T21:33:31.0385927Z BUILD_ENVIRONMENT=linux-jammy-py3.9-gcc11-build 2025-08-14T21:33:31.0386260Z HOSTNAME=047dfac93b61 2025-08-14T21:33:31.0386705Z GITHUB_PATH=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/add_path_2f537f4a-facb-49bf-be24-ce056ff0def0 2025-08-14T21:33:31.0387400Z GITHUB_ACTION=__run_2 2025-08-14T21:33:31.0387615Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=0 2025-08-14T21:33:31.0387862Z GITHUB_RUN_NUMBER=66307 2025-08-14T21:33:31.0388113Z TEST_CONFIG=cpu_inductor_freezing_avx2_huggingface 2025-08-14T21:33:31.0388397Z GITHUB_REPOSITORY_OWNER_ID=21003710 2025-08-14T21:33:31.0388646Z TORCH_NVCC_FLAGS=-Xfatbin -compress-all 2025-08-14T21:33:31.0388929Z SCCACHE_IDLE_TIMEOUT=0 2025-08-14T21:33:31.0389365Z SCRIBE_GRAPHQL_ACCESS_TOKEN=*** 2025-08-14T21:33:31.0389608Z GITHUB_TRIGGERING_ACTOR=pytorchmergebot 2025-08-14T21:33:31.0389846Z GITHUB_REF_TYPE=branch 2025-08-14T21:33:31.0390052Z TORCH_CUDA_ARCH_LIST=Maxwell 2025-08-14T21:33:31.0390305Z BASE_SHA=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:33:31.0390557Z XLA_CUDA= 2025-08-14T21:33:31.0390838Z NCCL_LIB_DIR=/usr/local/cuda/lib64/ 2025-08-14T21:33:31.0391179Z HUGGING_FACE_HUB_TOKEN=*** 2025-08-14T21:33:31.0391559Z *** 
2025-08-14T21:33:31.0391726Z GITHUB_REPOSITORY_ID=65600975 2025-08-14T21:33:31.0391951Z GITHUB_ACTIONS=true 2025-08-14T21:33:31.0392189Z SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 2025-08-14T21:33:31.0392478Z SHA1=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:33:31.0392766Z GITHUB_SHA=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:33:31.0393200Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/inductor-periodic.yml@refs/heads/main 2025-08-14T21:33:31.0393602Z UCC_HOME=/usr 2025-08-14T21:33:31.0393784Z VERBOSE_TEST_LOGS=False 2025-08-14T21:33:31.0393991Z GITHUB_REF=refs/heads/main 2025-08-14T21:33:31.0394200Z SHARD_NUMBER=1 2025-08-14T21:33:31.0394383Z GITHUB_REF_PROTECTED=true 2025-08-14T21:33:31.0394592Z HOME=/var/lib/jenkins 2025-08-14T21:33:31.0394824Z GITHUB_API_URL=https://api.github.com 2025-08-14T21:33:31.0395085Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2025-08-14T21:33:31.0395518Z UCX_COMMIT= 2025-08-14T21:33:31.0395703Z USE_SYSTEM_NCCL=1 2025-08-14T21:33:31.0395889Z NUM_TEST_SHARDS=1 2025-08-14T21:33:31.0396083Z UCX_HOME=/usr 2025-08-14T21:33:31.0396535Z GITHUB_STATE=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/save_state_2f537f4a-facb-49bf-be24-ce056ff0def0 2025-08-14T21:33:31.0397547Z JOB_NAME=linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2) 2025-08-14T21:33:31.0398806Z GITHUB_ENV=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_env_2f537f4a-facb-49bf-be24-ce056ff0def0 2025-08-14T21:33:31.0400054Z GITHUB_EVENT_PATH=/home/ec2-user/actions-runner/_work/_temp/_github_workflow/event.json 2025-08-14T21:33:31.0400675Z GITHUB_EVENT_NAME=schedule 2025-08-14T21:33:31.0400883Z DASHBOARD_TAG= 2025-08-14T21:33:31.0401078Z GITHUB_RUN_ID=16976338999 2025-08-14T21:33:31.0401381Z INSTALLED_OPENBLAS= 2025-08-14T21:33:31.0401837Z GITHUB_STEP_SUMMARY=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/step_summary_2f537f4a-facb-49bf-be24-ce056ff0def0 2025-08-14T21:33:31.0402357Z GITHUB_ACTOR=pytorchmergebot 2025-08-14T21:33:31.0402570Z PR_NUMBER= 2025-08-14T21:33:31.0402743Z DESIRED_CUDA= 2025-08-14T21:33:31.0402916Z GITHUB_RUN_ATTEMPT=1 2025-08-14T21:33:31.0403126Z ANACONDA_PYTHON_VERSION=3.9 2025-08-14T21:33:31.0403387Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2025-08-14T21:33:31.0403644Z TERM=vt100 2025-08-14T21:33:31.0403813Z INSTALLED_VISION=yes 2025-08-14T21:33:31.0403997Z BRANCH=main 2025-08-14T21:33:31.0404168Z SCCACHE_REGION=us-east-1 2025-08-14T21:33:31.0404386Z OPENSSL_ROOT_DIR=/opt/openssl 2025-08-14T21:33:31.0404606Z CUDA_PATH=/usr/local/cuda 2025-08-14T21:33:31.0404988Z GITHUB_ACTION_PATH=/home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/setup-linux 2025-08-14T21:33:31.0411858Z GITHUB_SERVER_URL=https://github.com 2025-08-14T21:33:31.0412100Z UCC_COMMIT= 2025-08-14T21:33:31.0412264Z REENABLED_ISSUES= 2025-08-14T21:33:31.0412458Z DOCS=yes 2025-08-14T21:33:31.0412623Z SHLVL=1 2025-08-14T21:33:31.0412781Z MAX_JOBS=38 2025-08-14T21:33:31.0412962Z GITHUB_ACTOR_ID=97764156 2025-08-14T21:33:31.0413345Z GITHUB_WORKFLOW_SHA=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:33:31.0413639Z GITHUB_REF_NAME=main 2025-08-14T21:33:31.0413950Z XLA_CLANG_CACHE_S3_BUCKET_NAME=ossci-compiler-clang-cache-circleci-xla 2025-08-14T21:33:31.0414278Z GITHUB_JOB=test 2025-08-14T21:33:31.0414472Z NO_TEST_TIMEOUT=False 2025-08-14T21:33:31.0414661Z TD_DISTRIBUTED=False 2025-08-14T21:33:31.0414889Z 
GITHUB_REPOSITORY=pytorch/pytorch 2025-08-14T21:33:31.0415132Z GITHUB_RETENTION_DAYS=90 2025-08-14T21:33:31.0415343Z OPENSSL_DIR=/opt/openssl 2025-08-14T21:33:31.0415543Z GITHUB_ACTION_REPOSITORY= 2025-08-14T21:33:31.0416115Z PATH=/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.9/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-08-14T21:33:31.0416679Z GITHUB_BASE_REF= 2025-08-14T21:33:31.0416866Z INSTALLED_ACL= 2025-08-14T21:33:31.0417232Z ARTIFACTS_FILE_SUFFIX=test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923 2025-08-14T21:33:31.0417641Z CI=true 2025-08-14T21:33:31.0417827Z GITHUB_REPOSITORY_OWNER=pytorch 2025-08-14T21:33:31.0418089Z RUST_LOG=sccache::server=error 2025-08-14T21:33:31.0418298Z JOB_ID=48128301923 2025-08-14T21:33:31.0418486Z GITHUB_HEAD_REF= 2025-08-14T21:33:31.0418661Z GITHUB_ACTION_REF= 2025-08-14T21:33:31.0418894Z SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2 2025-08-14T21:33:31.0419166Z TEST_SHOWLOCALS=False 2025-08-14T21:33:31.0419445Z GITHUB_WORKFLOW=inductor-periodic 2025-08-14T21:33:31.0419684Z DEBIAN_FRONTEND=noninteractive 2025-08-14T21:33:31.0420280Z GITHUB_OUTPUT=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_output_2f537f4a-facb-49bf-be24-ce056ff0def0 2025-08-14T21:33:31.0420747Z NO_TD=False 2025-08-14T21:33:31.0420928Z SKIP_SCCACHE_INITIALIZATION=1 2025-08-14T21:33:31.0421169Z NCCL_INCLUDE_DIR=/usr/local/cuda/include/ 2025-08-14T21:33:31.0421475Z _=/usr/bin/env 2025-08-14T21:33:31.0421742Z ++ python -c 'import site; print(site.getsitepackages()[0])' 2025-08-14T21:33:31.0761943Z + TORCH_INSTALL_DIR=/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch 2025-08-14T21:33:31.0762539Z + TORCH_BIN_DIR=/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/bin 2025-08-14T21:33:31.0763026Z + TORCH_LIB_DIR=/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib 2025-08-14T21:33:31.0763447Z + TORCH_TEST_DIR=/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/test 2025-08-14T21:33:31.0763775Z + BUILD_DIR=build 2025-08-14T21:33:31.0763981Z + BUILD_RENAMED_DIR=build_renamed 2025-08-14T21:33:31.0764209Z + BUILD_BIN_DIR=build/bin 2025-08-14T21:33:31.0764412Z + SHARD_NUMBER=1 2025-08-14T21:33:31.0764595Z + NUM_TEST_SHARDS=1 2025-08-14T21:33:31.0764795Z + export TORCH_SERIALIZATION_DEBUG=1 2025-08-14T21:33:31.0765046Z + TORCH_SERIALIZATION_DEBUG=1 2025-08-14T21:33:31.0765269Z + export VALGRIND=ON 2025-08-14T21:33:31.0765460Z + VALGRIND=ON 2025-08-14T21:33:31.0765691Z + [[ linux-jammy-py3.9-gcc11-build == *clang9* ]] 2025-08-14T21:33:31.0765987Z + [[ linux-jammy-py3.9-gcc11-build == *xpu* ]] 2025-08-14T21:33:31.0766279Z + [[ linux-jammy-py3.9-gcc11-build == *s390x* ]] 2025-08-14T21:33:31.0766525Z + [[ 0 == \1 ]] 2025-08-14T21:33:31.0766704Z + [[ True == \1 ]] 2025-08-14T21:33:31.0766919Z + [[ linux-jammy-py3.9-gcc11-build != *bazel* ]] 2025-08-14T21:33:31.0767185Z ++ realpath build/custom_test_artifacts 2025-08-14T21:33:31.0779043Z + CUSTOM_TEST_ARTIFACT_BUILD_DIR=/var/lib/jenkins/workspace/build/custom_test_artifacts 2025-08-14T21:33:31.0779527Z + [[ -n '' ]] 2025-08-14T21:33:31.0779741Z + echo 'Environment variables' 2025-08-14T21:33:31.0780013Z Environment variables 2025-08-14T21:33:31.0780231Z + env 2025-08-14T21:33:31.0819932Z GITHUB_WORKSPACE=/home/ec2-user/actions-runner/_work/pytorch/pytorch 2025-08-14T21:33:31.0820395Z CONTINUE_THROUGH_ERROR=True 2025-08-14T21:33:31.0820684Z BUILD_ENVIRONMENT=linux-jammy-py3.9-gcc11-build 
2025-08-14T21:33:31.0820977Z HOSTNAME=047dfac93b61 2025-08-14T21:33:31.0821419Z GITHUB_PATH=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/add_path_2f537f4a-facb-49bf-be24-ce056ff0def0 2025-08-14T21:33:31.0822086Z GITHUB_ACTION=__run_2 2025-08-14T21:33:31.0822304Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=0 2025-08-14T21:33:31.0822538Z GITHUB_RUN_NUMBER=66307 2025-08-14T21:33:31.0822768Z TEST_CONFIG=cpu_inductor_freezing_avx2_huggingface 2025-08-14T21:33:31.0823047Z GITHUB_REPOSITORY_OWNER_ID=21003710 2025-08-14T21:33:31.0823300Z TORCH_NVCC_FLAGS=-Xfatbin -compress-all 2025-08-14T21:33:31.0823541Z SCCACHE_IDLE_TIMEOUT=0 2025-08-14T21:33:31.0823961Z SCRIBE_GRAPHQL_ACCESS_TOKEN=*** 2025-08-14T21:33:31.0824207Z GITHUB_TRIGGERING_ACTOR=pytorchmergebot 2025-08-14T21:33:31.0824448Z GITHUB_REF_TYPE=branch 2025-08-14T21:33:31.0824650Z TORCH_CUDA_ARCH_LIST=Maxwell 2025-08-14T21:33:31.0824897Z BASE_SHA=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:33:31.0825147Z XLA_CUDA= 2025-08-14T21:33:31.0825328Z NCCL_LIB_DIR=/usr/local/cuda/lib64/ 2025-08-14T21:33:31.0825644Z HUGGING_FACE_HUB_TOKEN=*** 2025-08-14T21:33:31.0825978Z *** 2025-08-14T21:33:31.0826161Z GITHUB_REPOSITORY_ID=65600975 2025-08-14T21:33:31.0826427Z GITHUB_ACTIONS=true 2025-08-14T21:33:31.0826661Z SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 2025-08-14T21:33:31.0826953Z SHA1=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:33:31.0827222Z GITHUB_SHA=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:33:31.0827655Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/inductor-periodic.yml@refs/heads/main 2025-08-14T21:33:31.0828045Z UCC_HOME=/usr 2025-08-14T21:33:31.0828233Z TORCH_SERIALIZATION_DEBUG=1 2025-08-14T21:33:31.0828439Z VERBOSE_TEST_LOGS=False 2025-08-14T21:33:31.0828646Z GITHUB_REF=refs/heads/main 2025-08-14T21:33:31.0828846Z SHARD_NUMBER=1 2025-08-14T21:33:31.0829028Z GITHUB_REF_PROTECTED=true 2025-08-14T21:33:31.0829237Z HOME=/var/lib/jenkins 2025-08-14T21:33:31.0829456Z GITHUB_API_URL=https://api.github.com 2025-08-14T21:33:31.0829816Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2025-08-14T21:33:31.0830047Z UCX_COMMIT= 2025-08-14T21:33:31.0830221Z USE_SYSTEM_NCCL=1 2025-08-14T21:33:31.0830398Z NUM_TEST_SHARDS=1 2025-08-14T21:33:31.0830584Z UCX_HOME=/usr 2025-08-14T21:33:31.0831006Z GITHUB_STATE=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/save_state_2f537f4a-facb-49bf-be24-ce056ff0def0 2025-08-14T21:33:31.0831775Z JOB_NAME=linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2) 2025-08-14T21:33:31.0832540Z GITHUB_ENV=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_env_2f537f4a-facb-49bf-be24-ce056ff0def0 2025-08-14T21:33:31.0833131Z GITHUB_EVENT_PATH=/home/ec2-user/actions-runner/_work/_temp/_github_workflow/event.json 2025-08-14T21:33:31.0833506Z GITHUB_EVENT_NAME=schedule 2025-08-14T21:33:31.0833707Z DASHBOARD_TAG= 2025-08-14T21:33:31.0833894Z GITHUB_RUN_ID=16976338999 2025-08-14T21:33:31.0834108Z INSTALLED_OPENBLAS= 2025-08-14T21:33:31.0834552Z GITHUB_STEP_SUMMARY=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/step_summary_2f537f4a-facb-49bf-be24-ce056ff0def0 2025-08-14T21:33:31.0835057Z GITHUB_ACTOR=pytorchmergebot 2025-08-14T21:33:31.0835275Z PR_NUMBER= 2025-08-14T21:33:31.0835441Z DESIRED_CUDA= 2025-08-14T21:33:31.0835611Z GITHUB_RUN_ATTEMPT=1 2025-08-14T21:33:31.0835798Z VALGRIND=ON 2025-08-14T21:33:31.0835977Z ANACONDA_PYTHON_VERSION=3.9 
2025-08-14T21:33:31.0836226Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2025-08-14T21:33:31.0836487Z TERM=vt100 2025-08-14T21:33:31.0836655Z INSTALLED_VISION=yes 2025-08-14T21:33:31.0836832Z BRANCH=main 2025-08-14T21:33:31.0837008Z SCCACHE_REGION=us-east-1 2025-08-14T21:33:31.0837223Z OPENSSL_ROOT_DIR=/opt/openssl 2025-08-14T21:33:31.0837434Z CUDA_PATH=/usr/local/cuda 2025-08-14T21:33:31.0837814Z GITHUB_ACTION_PATH=/home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/setup-linux 2025-08-14T21:33:31.0838234Z GITHUB_SERVER_URL=https://github.com 2025-08-14T21:33:31.0838463Z UCC_COMMIT= 2025-08-14T21:33:31.0838628Z REENABLED_ISSUES= 2025-08-14T21:33:31.0838806Z DOCS=yes 2025-08-14T21:33:31.0838960Z SHLVL=1 2025-08-14T21:33:31.0839117Z MAX_JOBS=38 2025-08-14T21:33:31.0839356Z GITHUB_ACTOR_ID=97764156 2025-08-14T21:33:31.0839613Z GITHUB_WORKFLOW_SHA=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:33:31.0839902Z GITHUB_REF_NAME=main 2025-08-14T21:33:31.0840197Z XLA_CLANG_CACHE_S3_BUCKET_NAME=ossci-compiler-clang-cache-circleci-xla 2025-08-14T21:33:31.0840623Z GITHUB_JOB=test 2025-08-14T21:33:31.0840855Z NO_TEST_TIMEOUT=False 2025-08-14T21:33:31.0841052Z TD_DISTRIBUTED=False 2025-08-14T21:33:31.0841316Z GITHUB_REPOSITORY=pytorch/pytorch 2025-08-14T21:33:31.0841543Z GITHUB_RETENTION_DAYS=90 2025-08-14T21:33:31.0841759Z OPENSSL_DIR=/opt/openssl 2025-08-14T21:33:31.0841966Z GITHUB_ACTION_REPOSITORY= 2025-08-14T21:33:31.0842515Z PATH=/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.9/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-08-14T21:33:31.0843080Z GITHUB_BASE_REF= 2025-08-14T21:33:31.0843262Z INSTALLED_ACL= 2025-08-14T21:33:31.0843627Z ARTIFACTS_FILE_SUFFIX=test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923 2025-08-14T21:33:31.0844039Z CI=true 2025-08-14T21:33:31.0844219Z GITHUB_REPOSITORY_OWNER=pytorch 2025-08-14T21:33:31.0844488Z RUST_LOG=sccache::server=error 2025-08-14T21:33:31.0844697Z JOB_ID=48128301923 2025-08-14T21:33:31.0844881Z GITHUB_HEAD_REF= 2025-08-14T21:33:31.0845115Z GITHUB_ACTION_REF= 2025-08-14T21:33:31.0845335Z SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2 2025-08-14T21:33:31.0845603Z TEST_SHOWLOCALS=False 2025-08-14T21:33:31.0845816Z GITHUB_WORKFLOW=inductor-periodic 2025-08-14T21:33:31.0846045Z DEBIAN_FRONTEND=noninteractive 2025-08-14T21:33:31.0846503Z GITHUB_OUTPUT=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_output_2f537f4a-facb-49bf-be24-ce056ff0def0 2025-08-14T21:33:31.0846962Z NO_TD=False 2025-08-14T21:33:31.0847231Z SKIP_SCCACHE_INITIALIZATION=1 2025-08-14T21:33:31.0847471Z NCCL_INCLUDE_DIR=/usr/local/cuda/include/ 2025-08-14T21:33:31.0847709Z _=/usr/bin/env 2025-08-14T21:33:31.0847888Z + echo 'Testing pytorch' 2025-08-14T21:33:31.0848099Z Testing pytorch 2025-08-14T21:33:31.0848302Z + export LANG=C.UTF-8 2025-08-14T21:33:31.0848497Z + LANG=C.UTF-8 2025-08-14T21:33:31.0849026Z + PR_NUMBER= 2025-08-14T21:33:31.0849297Z + [[ cpu_inductor_freezing_avx2_huggingface == \d\e\f\a\u\l\t ]] 2025-08-14T21:33:31.0849668Z + [[ cpu_inductor_freezing_avx2_huggingface == \d\i\s\t\r\i\b\u\t\e\d ]] 2025-08-14T21:33:31.0850020Z + [[ cpu_inductor_freezing_avx2_huggingface == \s\l\o\w ]] 2025-08-14T21:33:31.0850355Z + [[ linux-jammy-py3.9-gcc11-build == *slow-gradcheck* ]] 2025-08-14T21:33:31.0850672Z + [[ linux-jammy-py3.9-gcc11-build == *cuda* ]] 2025-08-14T21:33:31.0850952Z + [[ linux-jammy-py3.9-gcc11-build == *rocm* ]] 
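The trap_add calls traced before the two environment dumps above stack multiple EXIT handlers (sccache_epilogue, then cleanup_workspace) instead of overwriting the existing trap. A rough, simplified sketch of that pattern — not the exact helper from common_utils.sh, and it ignores single quotes embedded in an existing trap body:

    trap_add() {
      # append a command to the current EXIT trap rather than replacing it
      local new_cmd=$1 existing nl=$'\n'
      existing=$(trap -p EXIT)              # e.g. trap -- 'echo sccache_epilogue' EXIT
      existing=${existing#trap -- \'}       # keep only the quoted trap body
      existing=${existing%\' EXIT}
      trap -- "${existing:+${existing}${nl}}${new_cmd}" EXIT
    }
    trap_add 'echo sccache_epilogue'
    trap_add 'echo cleanup_workspace'       # on exit, both run in registration order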
2025-08-14T21:33:31.0851234Z + [[ linux-jammy-py3.9-gcc11-build == *xpu* ]] 2025-08-14T21:33:31.0851545Z + [[ cpu_inductor_freezing_avx2_huggingface == *crossref* ]] 2025-08-14T21:33:31.0851853Z + [[ linux-jammy-py3.9-gcc11-build == *rocm* ]] 2025-08-14T21:33:31.0852122Z + [[ linux-jammy-py3.9-gcc11-build == *xpu* ]] 2025-08-14T21:33:31.0852411Z + [[ linux-jammy-py3.9-gcc11-build != *-bazel-* ]] 2025-08-14T21:33:31.0852683Z + pip_install ninja==1.10.2 2025-08-14T21:33:31.0852959Z + pip_install_pkg='python3 -m pip install --progress-bar off' 2025-08-14T21:33:31.0853312Z + python3 -m pip install --progress-bar off ninja==1.10.2 2025-08-14T21:33:31.5621339Z Collecting ninja==1.10.2 2025-08-14T21:33:31.5748069Z Downloading ninja-1.10.2-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl.metadata (5.0 kB) 2025-08-14T21:33:31.5856424Z Downloading ninja-1.10.2-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl (108 kB) 2025-08-14T21:33:32.6670525Z Installing collected packages: ninja 2025-08-14T21:33:32.6670889Z Attempting uninstall: ninja 2025-08-14T21:33:32.6684420Z Found existing installation: ninja 1.11.1.3 2025-08-14T21:33:32.6710152Z Uninstalling ninja-1.11.1.3: 2025-08-14T21:33:32.6787560Z Successfully uninstalled ninja-1.11.1.3 2025-08-14T21:33:32.7465306Z Successfully installed ninja-1.10.2 2025-08-14T21:33:32.8630086Z + export PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.9/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-08-14T21:33:32.8631557Z + PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.9/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-08-14T21:33:32.8632372Z + [[ linux-jammy-py3.9-gcc11-build == *aarch64* ]] 2025-08-14T21:33:32.8632707Z + [[ linux-jammy-py3.9-gcc11-build == *asan* ]] 2025-08-14T21:33:32.8633004Z + [[ linux-jammy-py3.9-gcc11-build == *-debug* ]] 2025-08-14T21:33:32.8633324Z + [[ linux-jammy-py3.9-gcc11-build != *-bazel-* ]] 2025-08-14T21:33:32.8633748Z + echo 'We are not in debug mode: linux-jammy-py3.9-gcc11-build. Expect the assertion to pass' 2025-08-14T21:33:32.8634244Z We are not in debug mode: linux-jammy-py3.9-gcc11-build. 
Expect the assertion to pass 2025-08-14T21:33:32.8636987Z + cd test 2025-08-14T21:33:32.8637266Z + python -c 'import torch; torch._C._crash_if_debug_asserts_fail(424242)' 2025-08-14T21:33:34.6044056Z + [[ cpu_inductor_freezing_avx2_huggingface == \n\o\g\p\u\_\N\O\_\A\V\X\2 ]] 2025-08-14T21:33:34.6044677Z + [[ cpu_inductor_freezing_avx2_huggingface == \n\o\g\p\u\_\A\V\X\5\1\2 ]] 2025-08-14T21:33:34.6045280Z + [[ cpu_inductor_freezing_avx2_huggingface == \l\e\g\a\c\y\_\n\v\i\d\i\a\_\d\r\i\v\e\r ]] 2025-08-14T21:33:34.6050781Z + DYNAMO_BENCHMARK_FLAGS=() 2025-08-14T21:33:34.6055194Z + [[ cpu_inductor_freezing_avx2_huggingface == *pr_time_benchmarks* ]] 2025-08-14T21:33:34.6055600Z + [[ cpu_inductor_freezing_avx2_huggingface == *dynamo_eager* ]] 2025-08-14T21:33:34.6055960Z + [[ cpu_inductor_freezing_avx2_huggingface == *aot_eager* ]] 2025-08-14T21:33:34.6056612Z + [[ cpu_inductor_freezing_avx2_huggingface == *aot_inductor* ]] 2025-08-14T21:33:34.6056994Z + [[ cpu_inductor_freezing_avx2_huggingface == *max_autotune_inductor* ]] 2025-08-14T21:33:34.6057394Z + [[ cpu_inductor_freezing_avx2_huggingface == *inductor* ]] 2025-08-14T21:33:34.6057718Z + [[ cpu_inductor_freezing_avx2_huggingface != *perf* ]] 2025-08-14T21:33:34.6058024Z + DYNAMO_BENCHMARK_FLAGS+=(--inductor) 2025-08-14T21:33:34.6058321Z + [[ cpu_inductor_freezing_avx2_huggingface == *dynamic* ]] 2025-08-14T21:33:34.6058637Z + [[ cpu_inductor_freezing_avx2_huggingface == *cpu* ]] 2025-08-14T21:33:34.6058922Z + DYNAMO_BENCHMARK_FLAGS+=(--device cpu) 2025-08-14T21:33:34.6087290Z + [[ linux-jammy-py3.9-gcc11-build == *libtorch* ]] 2025-08-14T21:33:34.6087679Z + [[ linux-jammy-py3.9-gcc11-build == *-bazel-* ]] 2025-08-14T21:33:34.6094380Z + cd test 2025-08-14T21:33:34.6095081Z + python -c 'import torch; print(torch.__config__.show())' 2025-08-14T21:33:36.0278718Z PyTorch built with: 2025-08-14T21:33:36.0279101Z - GCC 11.4 2025-08-14T21:33:36.0279375Z - C++ Version: 201703 2025-08-14T21:33:36.0280006Z - Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications 2025-08-14T21:33:36.0280809Z - Intel(R) MKL-DNN v3.7.1 (Git Hash 8d263e693366ef8db40acc569cc7d8edf644556d) 2025-08-14T21:33:36.0281414Z - OpenMP 201511 (a.k.a. 
OpenMP 4.5) 2025-08-14T21:33:36.0281833Z - LAPACK is enabled (usually provided by MKL) 2025-08-14T21:33:36.0282211Z - NNPACK is enabled 2025-08-14T21:33:36.0282515Z - CPU capability usage: AVX2 2025-08-14T21:33:36.0287459Z - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, COMMIT_SHA=1fc683cf17c8c673044538d10266c00f92987be2, CXX_COMPILER=/opt/cache/bin/c++, CXX_FLAGS= -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -DC10_NODEPRECATED -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -faligned-new -Werror -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.9.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_CUSPARSELT=OFF, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, USE_XCCL=OFF, USE_XPU=OFF, 2025-08-14T21:33:36.0291070Z 2025-08-14T21:33:36.3322519Z + cd test 2025-08-14T21:33:36.3322916Z + python -c 'import torch; print(torch.__config__.parallel_info())' 2025-08-14T21:33:37.7216527Z ATen/Parallel: 2025-08-14T21:33:37.7216928Z at::get_num_threads() : 20 2025-08-14T21:33:37.7217328Z at::get_num_interop_threads() : 20 2025-08-14T21:33:37.7217690Z OpenMP 201511 (a.k.a. OpenMP 4.5) 2025-08-14T21:33:37.7218014Z omp_get_max_threads() : 20 2025-08-14T21:33:37.7218668Z Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications 2025-08-14T21:33:37.7219325Z mkl_get_max_threads() : 20 2025-08-14T21:33:37.7219776Z Intel(R) MKL-DNN v3.7.1 (Git Hash 8d263e693366ef8db40acc569cc7d8edf644556d) 2025-08-14T21:33:37.7220270Z std::thread::hardware_concurrency() : 40 2025-08-14T21:33:37.7220639Z Environment variables: 2025-08-14T21:33:37.7220939Z OMP_NUM_THREADS : [not set] 2025-08-14T21:33:37.7221247Z MKL_NUM_THREADS : [not set] 2025-08-14T21:33:37.7228267Z ATen parallel backend: OpenMP 2025-08-14T21:33:37.7228500Z 2025-08-14T21:33:37.9894814Z + [[ cpu_inductor_freezing_avx2_huggingface == *numpy_2* ]] 2025-08-14T21:33:37.9895285Z + [[ linux-jammy-py3.9-gcc11-build == *aarch64* ]] 2025-08-14T21:33:37.9895902Z + [[ cpu_inductor_freezing_avx2_huggingface == *backward* ]] 2025-08-14T21:33:37.9896230Z + [[ cpu_inductor_freezing_avx2_huggingface == *xla* ]] 2025-08-14T21:33:37.9896581Z + [[ cpu_inductor_freezing_avx2_huggingface == *executorch* ]] 2025-08-14T21:33:37.9896947Z + [[ cpu_inductor_freezing_avx2_huggingface == \j\i\t\_\l\e\g\a\c\y ]] 2025-08-14T21:33:37.9897290Z + [[ linux-jammy-py3.9-gcc11-build == *libtorch* ]] 2025-08-14T21:33:37.9897622Z + [[ cpu_inductor_freezing_avx2_huggingface == distributed ]] 2025-08-14T21:33:37.9897986Z + [[ cpu_inductor_freezing_avx2_huggingface == *operator_benchmark* ]] 2025-08-14T21:33:37.9898376Z + [[ cpu_inductor_freezing_avx2_huggingface == *inductor_distributed* ]] 2025-08-14T21:33:37.9898759Z + [[ cpu_inductor_freezing_avx2_huggingface == *inductor-halide* ]] 2025-08-14T21:33:37.9899150Z + [[ cpu_inductor_freezing_avx2_huggingface == 
*inductor-triton-cpu* ]] 2025-08-14T21:33:37.9899561Z + [[ cpu_inductor_freezing_avx2_huggingface == *inductor-micro-benchmark* ]] 2025-08-14T21:33:37.9899949Z + [[ cpu_inductor_freezing_avx2_huggingface == *huggingface* ]] 2025-08-14T21:33:37.9900246Z + install_torchvision 2025-08-14T21:33:37.9900454Z + local orig_preload 2025-08-14T21:33:37.9900653Z + local commit 2025-08-14T21:33:37.9900838Z ++ get_pinned_commit vision 2025-08-14T21:33:37.9901071Z ++ cat .github/ci_commit_pins/vision.txt 2025-08-14T21:33:37.9928173Z + commit=966da7e46f65d6d49df3e31214470a4fe5cc8e66 2025-08-14T21:33:37.9928521Z + orig_preload= 2025-08-14T21:33:37.9928750Z + '[' -n '' ']' 2025-08-14T21:33:37.9929015Z + [[ linux-jammy-py3.9-gcc11-build == *cuda* ]] 2025-08-14T21:33:37.9929732Z + pip_build_and_install git+https://github.com/pytorch/vision.git@966da7e46f65d6d49df3e31214470a4fe5cc8e66 dist/vision 2025-08-14T21:33:37.9930438Z + local build_target=git+https://github.com/pytorch/vision.git@966da7e46f65d6d49df3e31214470a4fe5cc8e66 2025-08-14T21:33:37.9930900Z + local wheel_dir=dist/vision 2025-08-14T21:33:37.9931139Z + local found_whl=0 2025-08-14T21:33:37.9931364Z + for file in "${wheel_dir}"/*.whl 2025-08-14T21:33:37.9931739Z + [[ -f dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl ]] 2025-08-14T21:33:37.9932107Z + found_whl=1 2025-08-14T21:33:37.9932287Z + break 2025-08-14T21:33:37.9942445Z + '[' 1 == 0 ']' 2025-08-14T21:33:37.9942735Z + for file in "${wheel_dir}"/*.whl 2025-08-14T21:33:37.9943207Z + pip_install_whl dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl 2025-08-14T21:33:37.9943707Z + args=('dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl') 2025-08-14T21:33:37.9944050Z + local args 2025-08-14T21:33:37.9944358Z + [[ dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl == *\ * ]] 2025-08-14T21:33:37.9944727Z + for path in "${args[@]}" 2025-08-14T21:33:37.9945089Z + echo 'Installing dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl' 2025-08-14T21:33:37.9945590Z Installing dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl 2025-08-14T21:33:37.9946157Z + python3 -mpip install --no-index --no-deps dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl 2025-08-14T21:33:38.4087972Z Processing ./dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl 2025-08-14T21:33:38.4219145Z Installing collected packages: torchvision 2025-08-14T21:33:38.9430596Z Successfully installed torchvision-0.22.0a0+966da7e 2025-08-14T21:33:38.9933402Z + '[' -n '' ']' 2025-08-14T21:33:38.9933635Z + id=0 2025-08-14T21:33:38.9933847Z + test_dynamo_benchmark huggingface 0 2025-08-14T21:33:38.9936285Z ++ pwd 2025-08-14T21:33:38.9942159Z + TEST_REPORTS_DIR=/var/lib/jenkins/workspace/test/test-reports 2025-08-14T21:33:38.9942490Z + local suite=huggingface 2025-08-14T21:33:38.9942707Z + shift 2025-08-14T21:33:38.9942890Z + local shard_id=0 2025-08-14T21:33:38.9943078Z + shift 2025-08-14T21:33:38.9943330Z + [[ cpu_inductor_freezing_avx2_huggingface == *perf_compare* ]] 2025-08-14T21:33:38.9943689Z + [[ cpu_inductor_freezing_avx2_huggingface == *perf* ]] 2025-08-14T21:33:38.9944282Z + [[ cpu_inductor_freezing_avx2_huggingface == *cpu* ]] 2025-08-14T21:33:38.9944554Z + local dt=float32 2025-08-14T21:33:38.9944789Z + [[ cpu_inductor_freezing_avx2_huggingface == *amp* ]] 2025-08-14T21:33:38.9945129Z + [[ cpu_inductor_freezing_avx2_huggingface == *freezing* ]] 2025-08-14T21:33:38.9945537Z + 
test_single_dynamo_benchmark inference huggingface 0 --inference --float32 --freezing 2025-08-14T21:33:38.9949135Z ++ pwd 2025-08-14T21:33:38.9951031Z + TEST_REPORTS_DIR=/var/lib/jenkins/workspace/test/test-reports 2025-08-14T21:33:38.9951382Z + mkdir -p /var/lib/jenkins/workspace/test/test-reports 2025-08-14T21:33:38.9975120Z + local name=inference 2025-08-14T21:33:38.9975360Z + shift 2025-08-14T21:33:38.9975546Z + local suite=huggingface 2025-08-14T21:33:38.9975772Z + shift 2025-08-14T21:33:38.9975950Z + local shard_id=0 2025-08-14T21:33:38.9976130Z + shift 2025-08-14T21:33:38.9976291Z + partition_flags=() 2025-08-14T21:33:38.9976496Z + local partition_flags 2025-08-14T21:33:38.9976699Z + [[ -n 1 ]] 2025-08-14T21:33:38.9976868Z + [[ -n 0 ]] 2025-08-14T21:33:38.9977177Z + partition_flags=(--total-partitions "$NUM_TEST_SHARDS" --partition-id "$shard_id") 2025-08-14T21:33:38.9977615Z + [[ cpu_inductor_freezing_avx2_huggingface == *perf_compare* ]] 2025-08-14T21:33:38.9977948Z + [[ cpu_inductor_freezing_avx2_huggingface == *perf* ]] 2025-08-14T21:33:38.9978261Z + [[ cpu_inductor_freezing_avx2_huggingface == *_avx2* ]] 2025-08-14T21:33:38.9978580Z + TEST_CONFIG=cpu_inductor_freezing_huggingface 2025-08-14T21:33:38.9978880Z + [[ cpu_inductor_freezing_huggingface == *_avx512* ]] 2025-08-14T21:33:38.9979840Z + python benchmarks/dynamo/huggingface.py --ci --accuracy --timing --explain --print-compilation-time --inductor --device cpu --inference --float32 --freezing --total-partitions 1 --partition-id 0 --output /var/lib/jenkins/workspace/test/test-reports/inference_huggingface.csv 2025-08-14T21:33:43.8575492Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-14T21:33:43.8578055Z from pkg_resources import resource_filename 2025-08-14T21:33:44.4712517Z 2025-08-14T21:33:44.4768902Z config.json: 0% 0.00/694 [00:00bcxy", (query, key)) # multiply 2025-08-14T21:36:43.9470926Z 2025-08-14T21:36:43.9471115Z cudagraph partition due to non gpu ops. 
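The benchmark command above (benchmarks/dynamo/huggingface.py ... --inductor --device cpu --inference --float32 --freezing) amounts to compiling each suite model with the inductor backend, in float32 CPU inference mode, with inductor's freezing pass enabled. A minimal standalone sketch of that configuration, using a toy module instead of the HuggingFace suite (mapping --freezing onto the inductor config flag below is an assumption based on the flag name, not taken from the harness source):

    import torch
    import torch._inductor.config as inductor_config

    # Assumed equivalent of the harness's --freezing flag: enable inductor's
    # freezing pass, which constant-folds parameters for inference.
    inductor_config.freezing = True

    # Toy stand-in for a HuggingFace model from the suite (illustrative only).
    model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).eval()
    compiled = torch.compile(model, backend="inductor")

    x = torch.randn(8, 64)        # float32 CPU input, matching --device cpu --float32
    with torch.no_grad():         # inference only, matching --inference
        print(compiled(x).shape)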
Found from : 2025-08-14T21:36:43.9471772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9472387Z layer_outputs = layer_module( 2025-08-14T21:36:43.9472819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9473277Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9473811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9474326Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9474860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9475427Z self_outputs = self.self( 2025-08-14T21:36:43.9475926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:43.9476485Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:43.9477114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:43.9477807Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False)) 2025-08-14T21:36:43.9494904Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk 2025-08-14T21:36:43.9495466Z hidden_states = hidden_states.view( 2025-08-14T21:36:43.9495662Z 2025-08-14T21:36:43.9495807Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9496484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9497137Z layer_outputs = layer_module( 2025-08-14T21:36:43.9497724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9498200Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9498746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9499277Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9499976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9500562Z self_outputs = self.self( 2025-08-14T21:36:43.9501064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:43.9501618Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:43.9502257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:43.9503008Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:43.9503323Z 2025-08-14T21:36:43.9503468Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9504114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9504795Z layer_outputs = layer_module( 2025-08-14T21:36:43.9505232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9505690Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9506211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9506798Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9507328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9507833Z self_outputs = self.self( 2025-08-14T21:36:43.9508333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:43.9508888Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:43.9509515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:43.9510252Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:43.9510576Z 2025-08-14T21:36:43.9510715Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9511383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9512005Z layer_outputs = layer_module( 2025-08-14T21:36:43.9512434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9512891Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9513421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9513947Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9518624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9519143Z self_outputs = self.self( 2025-08-14T21:36:43.9519643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:43.9520192Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:43.9520891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:43.9521712Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:43.9522024Z 2025-08-14T21:36:43.9522140Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9522401Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9522662Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9522928Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9523218Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9523876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9524497Z layer_outputs = layer_module( 2025-08-14T21:36:43.9524938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9525397Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9525929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9526459Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9526979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9527488Z self_outputs = self.self( 2025-08-14T21:36:43.9527998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 536, in forward 2025-08-14T21:36:43.9528637Z diagonal_mask = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:43.9529381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 834, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:43.9530073Z self._mask_invalid_locations(diagonal_attention_scores, window_overlap) 2025-08-14T21:36:43.9530743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 762, in _mask_invalid_locations 2025-08-14T21:36:43.9531427Z input_tensor[:, :affected_seq_len, :, : affected_seq_len + 1] = torch.full_like( 2025-08-14T21:36:43.9531689Z 2025-08-14T21:36:43.9531795Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9532097Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9532746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9533419Z layer_outputs = layer_module( 2025-08-14T21:36:43.9533852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9534311Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9534840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9535362Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9535873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9536390Z self_outputs = self.self( 2025-08-14T21:36:43.9536887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-08-14T21:36:43.9537395Z attn_scores += diagonal_mask 2025-08-14T21:36:43.9537555Z 2025-08-14T21:36:43.9537689Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9538392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9539000Z layer_outputs = layer_module( 2025-08-14T21:36:43.9539421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9539872Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9540398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9540916Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9541426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9541940Z self_outputs = self.self( 2025-08-14T21:36:43.9542434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-08-14T21:36:43.9542972Z attn_probs = nn.functional.softmax( 2025-08-14T21:36:43.9547366Z 2025-08-14T21:36:43.9547509Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9548157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9549154Z layer_outputs = layer_module( 2025-08-14T21:36:43.9549599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9550057Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9550597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9551120Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9551801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9552307Z self_outputs = self.self( 2025-08-14T21:36:43.9552806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:43.9553380Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:43.9554038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:43.9554776Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-08-14T21:36:43.9555314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:43.9555752Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:43.9555949Z 2025-08-14T21:36:43.9556077Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9556726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9557329Z layer_outputs = layer_module( 2025-08-14T21:36:43.9557829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9558332Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9558855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9559379Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9559894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9560401Z self_outputs = self.self( 2025-08-14T21:36:43.9560902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:43.9561673Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:43.9562382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:43.9563080Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-08-14T21:36:43.9563719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-08-14T21:36:43.9564303Z chunked_hidden_states = nn.functional.pad( 2025-08-14T21:36:43.9564724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:43.9565164Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:43.9565372Z 2025-08-14T21:36:43.9565503Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9566158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9566797Z layer_outputs = layer_module( 2025-08-14T21:36:43.9567233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9567691Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9568208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9568732Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9569248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9569844Z self_outputs = self.self( 2025-08-14T21:36:43.9570336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:43.9570911Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:43.9571569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:43.9580629Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:36:43.9580976Z 2025-08-14T21:36:43.9581129Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9582014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9582862Z layer_outputs = layer_module( 2025-08-14T21:36:43.9583311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9583769Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9584310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9584845Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9585362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9585890Z self_outputs = self.self( 2025-08-14T21:36:43.9586382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:43.9587048Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:43.9587708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:43.9588419Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:36:43.9588689Z 2025-08-14T21:36:43.9588880Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9589518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9590113Z layer_outputs = layer_module( 2025-08-14T21:36:43.9590543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9590992Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9591510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9592020Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9592563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9593081Z self_outputs = self.self( 2025-08-14T21:36:43.9593589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-08-14T21:36:43.9594253Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-08-14T21:36:43.9594560Z 2025-08-14T21:36:43.9594667Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9594915Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9595205Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9595843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9596445Z layer_outputs = layer_module( 2025-08-14T21:36:43.9596926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9597377Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9597897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward 2025-08-14T21:36:43.9598425Z layer_output = apply_chunking_to_forward( 2025-08-14T21:36:43.9598931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:36:43.9599424Z return forward_fn(*input_tensors) 2025-08-14T21:36:43.9599940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk 2025-08-14T21:36:43.9600499Z intermediate_output = self.intermediate(attn_output) 2025-08-14T21:36:43.9601203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward 2025-08-14T21:36:43.9601839Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:36:43.9602323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:36:43.9602744Z return self.act(input) 2025-08-14T21:36:43.9602890Z 2025-08-14T21:36:43.9602988Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9603246Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9603522Z cudagraph partition due to non gpu ops. 
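The stack traces above all bottom out in the same handful of sliding-chunk attention ops from modeling_longformer.py; the one that recurs most, torch.einsum("bcxd,bcyd->bcxy", (query, key)), is simply a per-(batch, chunk) matmul of query against transposed key. A small self-contained check, with shapes picked arbitrarily for illustration:

    import torch

    # b=batch, c=number of chunks, x=y=chunk length, d=head dimension
    b, c, x, y, d = 2, 3, 4, 4, 8
    query = torch.randn(b, c, x, d)
    key = torch.randn(b, c, y, d)

    # The einsum that appears in the Longformer tracebacks above...
    scores = torch.einsum("bcxd,bcyd->bcxy", (query, key))

    # ...is equivalent to a batched matmul over the chunk dimension.
    assert torch.allclose(scores, query @ key.transpose(-1, -2), atol=1e-5)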
Found from : 2025-08-14T21:36:43.9779983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9780582Z layer_outputs = layer_module( 2025-08-14T21:36:43.9781021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9781472Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9781983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9782497Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9783026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9783545Z self_outputs = self.self( 2025-08-14T21:36:43.9784031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-08-14T21:36:43.9784549Z attn_probs = nn.functional.softmax( 2025-08-14T21:36:43.9784717Z 2025-08-14T21:36:43.9784822Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9785101Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9785732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9786335Z layer_outputs = layer_module( 2025-08-14T21:36:43.9786764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9787208Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9787788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9788307Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9788822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9789328Z self_outputs = self.self( 2025-08-14T21:36:43.9789889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:43.9790519Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:43.9791181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:43.9791914Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-08-14T21:36:43.9792450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:43.9792884Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:43.9793080Z 2025-08-14T21:36:43.9793211Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9793855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9794518Z layer_outputs = layer_module( 2025-08-14T21:36:43.9794947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9795393Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9795920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9796501Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9797023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9797528Z self_outputs = self.self( 2025-08-14T21:36:43.9798020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:43.9798583Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:43.9799237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:43.9799916Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-08-14T21:36:43.9800556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-08-14T21:36:43.9801217Z chunked_hidden_states = nn.functional.pad( 2025-08-14T21:36:43.9801636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:43.9802086Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:43.9802294Z 2025-08-14T21:36:43.9802427Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9803083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9803695Z layer_outputs = layer_module( 2025-08-14T21:36:43.9808377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9808832Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9809366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9809891Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9810474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9810990Z self_outputs = self.self( 2025-08-14T21:36:43.9811477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:43.9812052Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:43.9812710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:43.9813411Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:36:43.9813672Z 2025-08-14T21:36:43.9813805Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9814452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9815057Z layer_outputs = layer_module( 2025-08-14T21:36:43.9815485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9815933Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9816457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9816973Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9817490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9817999Z self_outputs = self.self( 2025-08-14T21:36:43.9818491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:43.9819267Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:43.9819923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:43.9820632Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:36:43.9820904Z 2025-08-14T21:36:43.9821037Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9821683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9822285Z layer_outputs = layer_module( 2025-08-14T21:36:43.9822723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9823233Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9823770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9824289Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9824814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9825330Z self_outputs = self.self( 2025-08-14T21:36:43.9825880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-08-14T21:36:43.9826541Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-08-14T21:36:43.9826854Z 2025-08-14T21:36:43.9826961Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9827222Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9827504Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9828203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9828811Z layer_outputs = layer_module( 2025-08-14T21:36:43.9829245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9829693Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9830222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward 2025-08-14T21:36:43.9830777Z layer_output = apply_chunking_to_forward( 2025-08-14T21:36:43.9831287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:36:43.9831778Z return forward_fn(*input_tensors) 2025-08-14T21:36:43.9832314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk 2025-08-14T21:36:43.9832888Z intermediate_output = self.intermediate(attn_output) 2025-08-14T21:36:43.9837673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward 2025-08-14T21:36:43.9838239Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:36:43.9838712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:36:43.9839144Z return self.act(input) 2025-08-14T21:36:43.9839282Z 2025-08-14T21:36:43.9839380Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9839641Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9839931Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9840560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9841312Z layer_outputs = layer_module( 2025-08-14T21:36:43.9841753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9842203Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9842714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9843232Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9843751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9844263Z self_outputs = self.self( 2025-08-14T21:36:43.9844752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:43.9845312Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:43.9845940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:43.9846674Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:43.9846985Z 2025-08-14T21:36:43.9847088Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9847375Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9848132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9849034Z layer_outputs = layer_module( 2025-08-14T21:36:43.9849474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9849934Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9850460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9851044Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9851564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9852086Z self_outputs = self.self( 2025-08-14T21:36:43.9852762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:43.9853312Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:43.9853940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:43.9854637Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False)) 2025-08-14T21:36:43.9855257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk 2025-08-14T21:36:43.9855793Z hidden_states = hidden_states.view( 2025-08-14T21:36:43.9855969Z 2025-08-14T21:36:43.9856096Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9856737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9857335Z layer_outputs = layer_module( 2025-08-14T21:36:43.9857769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9858230Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9858752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9859392Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9859917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9860438Z self_outputs = self.self( 2025-08-14T21:36:43.9860926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:43.9861487Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:43.9866312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:43.9867060Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:43.9867374Z 2025-08-14T21:36:43.9867524Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9868161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9868772Z layer_outputs = layer_module( 2025-08-14T21:36:43.9869208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9869648Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9870172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9870693Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9871206Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9871706Z self_outputs = self.self( 2025-08-14T21:36:43.9872191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:43.9872743Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:43.9873460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:43.9874187Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:43.9874503Z 2025-08-14T21:36:43.9874633Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9875278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9875888Z layer_outputs = layer_module( 2025-08-14T21:36:43.9876320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9876850Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9877421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9877935Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9878455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9878965Z self_outputs = self.self( 2025-08-14T21:36:43.9879462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:43.9880012Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:43.9880635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:43.9881449Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:43.9881813Z 2025-08-14T21:36:43.9881922Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9882171Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9882462Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9883110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9883710Z layer_outputs = layer_module( 2025-08-14T21:36:43.9884143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9884601Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9885136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9885703Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9886223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9886741Z self_outputs = self.self( 2025-08-14T21:36:43.9887229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-08-14T21:36:43.9887748Z attn_scores += diagonal_mask 2025-08-14T21:36:43.9887913Z 2025-08-14T21:36:43.9888043Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9888685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9889278Z layer_outputs = layer_module( 2025-08-14T21:36:43.9889719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9890173Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9890697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9899680Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9900380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9900899Z self_outputs = self.self( 2025-08-14T21:36:43.9901388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-08-14T21:36:43.9901904Z attn_probs = nn.functional.softmax( 2025-08-14T21:36:43.9902081Z 2025-08-14T21:36:43.9902178Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9902463Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9903096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9903711Z layer_outputs = layer_module( 2025-08-14T21:36:43.9904134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9904585Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9905104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9907756Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9908274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9908775Z self_outputs = self.self( 2025-08-14T21:36:43.9909272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:43.9909848Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:43.9910580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:43.9911319Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-08-14T21:36:43.9911851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:43.9912285Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:43.9912476Z 2025-08-14T21:36:43.9912611Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9913245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9913855Z layer_outputs = layer_module( 2025-08-14T21:36:43.9914288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9914749Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9915267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9915787Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9916303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9916805Z self_outputs = self.self( 2025-08-14T21:36:43.9917301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:43.9917873Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:43.9918534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:43.9919214Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-08-14T21:36:43.9919963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-08-14T21:36:43.9920672Z chunked_hidden_states = nn.functional.pad( 2025-08-14T21:36:43.9921151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:43.9921585Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:43.9921784Z 2025-08-14T21:36:43.9921913Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9922556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9923166Z layer_outputs = layer_module( 2025-08-14T21:36:43.9923591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9924049Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9924580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9925146Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9925667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9926178Z self_outputs = self.self( 2025-08-14T21:36:43.9926669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:43.9927229Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:43.9927888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:43.9928643Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:36:43.9928904Z 2025-08-14T21:36:43.9929045Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9929682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9930284Z layer_outputs = layer_module( 2025-08-14T21:36:43.9930715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9931163Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9931677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9932198Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9932712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9933216Z self_outputs = self.self( 2025-08-14T21:36:43.9933712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:43.9934277Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:43.9954369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:43.9955253Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:36:43.9955522Z 2025-08-14T21:36:43.9955671Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9956338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9956972Z layer_outputs = layer_module( 2025-08-14T21:36:43.9957429Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9958106Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9958644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9959193Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9959729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9960256Z self_outputs = self.self( 2025-08-14T21:36:43.9960753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-08-14T21:36:43.9961539Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-08-14T21:36:43.9961848Z 2025-08-14T21:36:43.9961964Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9962217Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9962512Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9963166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9968019Z layer_outputs = layer_module( 2025-08-14T21:36:43.9968452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9968914Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9969441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward 2025-08-14T21:36:43.9969977Z layer_output = apply_chunking_to_forward( 2025-08-14T21:36:43.9970474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:36:43.9971066Z return forward_fn(*input_tensors) 2025-08-14T21:36:43.9971587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk 2025-08-14T21:36:43.9972151Z intermediate_output = self.intermediate(attn_output) 2025-08-14T21:36:43.9972718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward 2025-08-14T21:36:43.9973285Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:36:43.9973763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:36:43.9974184Z return self.act(input) 2025-08-14T21:36:43.9974329Z 2025-08-14T21:36:43.9974433Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9974692Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9974983Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9975631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9976242Z layer_outputs = layer_module( 2025-08-14T21:36:43.9976679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9977131Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9977664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9978260Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9978843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9979354Z self_outputs = self.self( 2025-08-14T21:36:43.9979854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:43.9980460Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:43.9981078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:43.9981811Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:43.9982126Z 2025-08-14T21:36:43.9982227Z cudagraph partition due to non gpu ops 2025-08-14T21:36:43.9982518Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:43.9983215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9983827Z layer_outputs = layer_module( 2025-08-14T21:36:43.9984267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9984719Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9985250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9985773Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9986295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9986815Z self_outputs = self.self( 2025-08-14T21:36:43.9987302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:43.9987855Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:43.9988482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:43.9989224Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False)) 2025-08-14T21:36:43.9989849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk 2025-08-14T21:36:43.9990375Z hidden_states = hidden_states.view( 2025-08-14T21:36:43.9990548Z 2025-08-14T21:36:43.9990689Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:43.9991324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:43.9991930Z layer_outputs = layer_module( 2025-08-14T21:36:43.9992367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:43.9997025Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:43.9997547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:43.9998066Z self_attn_outputs = self.attention( 2025-08-14T21:36:43.9998583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:43.9999096Z self_outputs = self.self( 2025-08-14T21:36:43.9999586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0000142Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0000782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0001581Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0001903Z 2025-08-14T21:36:44.0002032Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0002749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0003362Z layer_outputs = layer_module( 2025-08-14T21:36:44.0003792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0004249Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0004777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0005302Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0005817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0006337Z self_outputs = self.self( 2025-08-14T21:36:44.0006838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0007476Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0008137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0008885Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0009199Z 2025-08-14T21:36:44.0009342Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0009974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0010586Z layer_outputs = layer_module( 2025-08-14T21:36:44.0011017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0011524Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0012089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0012613Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0013135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0013645Z self_outputs = self.self( 2025-08-14T21:36:44.0014125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0014673Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0015295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0016022Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0016335Z 2025-08-14T21:36:44.0016440Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0016702Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0016986Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0017622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0018224Z layer_outputs = layer_module( 2025-08-14T21:36:44.0018651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0019101Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0019617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0020142Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0020708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0021216Z self_outputs = self.self( 2025-08-14T21:36:44.0030202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-08-14T21:36:44.0030891Z attn_scores += diagonal_mask 2025-08-14T21:36:44.0031079Z 2025-08-14T21:36:44.0031232Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0032103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0032792Z layer_outputs = layer_module( 2025-08-14T21:36:44.0033223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0033685Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0034208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0034729Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0035248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0035756Z self_outputs = self.self( 2025-08-14T21:36:44.0036310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-08-14T21:36:44.0036886Z attn_probs = nn.functional.softmax( 2025-08-14T21:36:44.0037054Z 2025-08-14T21:36:44.0037168Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0037744Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0038476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0039083Z layer_outputs = layer_module( 2025-08-14T21:36:44.0039510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0039951Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0040476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0041092Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0041628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0042145Z self_outputs = self.self( 2025-08-14T21:36:44.0042639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0043211Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0043869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0044618Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-08-14T21:36:44.0045205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:44.0045642Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:44.0045835Z 2025-08-14T21:36:44.0045962Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0046601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0047214Z layer_outputs = layer_module( 2025-08-14T21:36:44.0047643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0048142Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0048665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0049569Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0050080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0050665Z self_outputs = self.self( 2025-08-14T21:36:44.0051220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0051782Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0052458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0053198Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-08-14T21:36:44.0053833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-08-14T21:36:44.0054409Z chunked_hidden_states = nn.functional.pad( 2025-08-14T21:36:44.0054828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:44.0055260Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:44.0055449Z 2025-08-14T21:36:44.0055582Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0056223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0056936Z layer_outputs = layer_module( 2025-08-14T21:36:44.0057362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0057804Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0058322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0058842Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0059356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0059860Z self_outputs = self.self( 2025-08-14T21:36:44.0060351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0060918Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0061583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0062287Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:36:44.0062559Z 2025-08-14T21:36:44.0062689Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0063328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0063928Z layer_outputs = layer_module( 2025-08-14T21:36:44.0064347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0064800Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0071560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0072080Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0072675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0073187Z self_outputs = self.self( 2025-08-14T21:36:44.0073681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0074238Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0074893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0075591Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:36:44.0075848Z 2025-08-14T21:36:44.0075987Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0076631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0077244Z layer_outputs = layer_module( 2025-08-14T21:36:44.0077677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0078126Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0078652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0079182Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0079769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0080334Z self_outputs = self.self( 2025-08-14T21:36:44.0080825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-08-14T21:36:44.0081611Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-08-14T21:36:44.0081916Z 2025-08-14T21:36:44.0082031Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0082290Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0082583Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0083238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0083843Z layer_outputs = layer_module( 2025-08-14T21:36:44.0084331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0084788Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0085307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward 2025-08-14T21:36:44.0085836Z layer_output = apply_chunking_to_forward( 2025-08-14T21:36:44.0086344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:36:44.0086845Z return forward_fn(*input_tensors) 2025-08-14T21:36:44.0087362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk 2025-08-14T21:36:44.0087920Z intermediate_output = self.intermediate(attn_output) 2025-08-14T21:36:44.0088487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward 2025-08-14T21:36:44.0089046Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:36:44.0089517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:36:44.0089957Z return self.act(input) 2025-08-14T21:36:44.0090106Z 2025-08-14T21:36:44.0090204Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0090455Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0090798Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0091447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0092053Z layer_outputs = layer_module( 2025-08-14T21:36:44.0092482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0092924Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0093448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0093971Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0098718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0099248Z self_outputs = self.self( 2025-08-14T21:36:44.0099754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0100314Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0100932Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0101672Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0101981Z 2025-08-14T21:36:44.0102089Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0102381Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0103016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0103681Z layer_outputs = layer_module( 2025-08-14T21:36:44.0104119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0104570Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0105096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0105619Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0106140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0106653Z self_outputs = self.self( 2025-08-14T21:36:44.0107146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0107708Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0108339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0109147Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False)) 2025-08-14T21:36:44.0109779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk 2025-08-14T21:36:44.0110301Z hidden_states = hidden_states.view( 2025-08-14T21:36:44.0110473Z 2025-08-14T21:36:44.0110613Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0111245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0111853Z layer_outputs = layer_module( 2025-08-14T21:36:44.0112288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0112732Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0113373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0113905Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0114429Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0114930Z self_outputs = self.self( 2025-08-14T21:36:44.0115429Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0115982Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0116608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0117338Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0117651Z 2025-08-14T21:36:44.0117784Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0118427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0119040Z layer_outputs = layer_module( 2025-08-14T21:36:44.0119463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0119917Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0120447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0120958Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0121610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0122121Z self_outputs = self.self( 2025-08-14T21:36:44.0122621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0127407Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0128032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0128758Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0129066Z 2025-08-14T21:36:44.0129204Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0129841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0130451Z layer_outputs = layer_module( 2025-08-14T21:36:44.0130887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0131340Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0131858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0132379Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0132894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0133396Z self_outputs = self.self( 2025-08-14T21:36:44.0133886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0134442Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0135073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0135847Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0136164Z 2025-08-14T21:36:44.0136265Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0136526Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0136814Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0137265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0137357Z layer_outputs = layer_module( 2025-08-14T21:36:44.0137722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0137829Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0138241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0138356Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0138715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0138814Z self_outputs = self.self( 2025-08-14T21:36:44.0139169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-08-14T21:36:44.0139261Z attn_scores += diagonal_mask 2025-08-14T21:36:44.0139273Z 2025-08-14T21:36:44.0139412Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0139854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0140000Z layer_outputs = layer_module( 2025-08-14T21:36:44.0140279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0140383Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0140747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0140841Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0141191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0141286Z self_outputs = self.self( 2025-08-14T21:36:44.0141639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-08-14T21:36:44.0141746Z attn_probs = nn.functional.softmax( 2025-08-14T21:36:44.0141762Z 2025-08-14T21:36:44.0141858Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0141992Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0142503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0142592Z layer_outputs = layer_module( 2025-08-14T21:36:44.0142879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0142977Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0143329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0143429Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0143785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0143878Z self_outputs = self.self( 2025-08-14T21:36:44.0144302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0144453Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0144906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0145127Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-08-14T21:36:44.0145370Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:44.0145499Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:44.0145512Z 2025-08-14T21:36:44.0145638Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0146090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0146182Z layer_outputs = layer_module( 2025-08-14T21:36:44.0146466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0146578Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0146933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0147036Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0147389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0147473Z self_outputs = self.self( 2025-08-14T21:36:44.0147833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0148039Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0148489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0148666Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-08-14T21:36:44.0149430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-08-14T21:36:44.0149553Z chunked_hidden_states = nn.functional.pad( 2025-08-14T21:36:44.0149798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:44.0149918Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:44.0149932Z 2025-08-14T21:36:44.0150066Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0150514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0150614Z layer_outputs = layer_module( 2025-08-14T21:36:44.0150895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0150993Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0151357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0151454Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0151820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0151906Z self_outputs = self.self( 2025-08-14T21:36:44.0156430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0156591Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0157143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0157337Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:36:44.0157357Z 2025-08-14T21:36:44.0157486Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0157931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0158025Z layer_outputs = layer_module( 2025-08-14T21:36:44.0158304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0158403Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0158769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0158868Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0159232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0159319Z self_outputs = self.self( 2025-08-14T21:36:44.0159671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0159823Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0160268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0160466Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:36:44.0160547Z 2025-08-14T21:36:44.0160677Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0161200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0161298Z layer_outputs = layer_module( 2025-08-14T21:36:44.0161576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0161683Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0162038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0162132Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0162496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0162584Z self_outputs = self.self( 2025-08-14T21:36:44.0162937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-08-14T21:36:44.0163182Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-08-14T21:36:44.0163195Z 2025-08-14T21:36:44.0163298Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0163403Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0163529Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0163977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0164082Z layer_outputs = layer_module( 2025-08-14T21:36:44.0164359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0164471Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0164822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward 2025-08-14T21:36:44.0164973Z layer_output = apply_chunking_to_forward( 2025-08-14T21:36:44.0165318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:36:44.0165414Z return forward_fn(*input_tensors) 2025-08-14T21:36:44.0165777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk 2025-08-14T21:36:44.0165927Z intermediate_output = self.intermediate(attn_output) 2025-08-14T21:36:44.0166281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward 2025-08-14T21:36:44.0166446Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:36:44.0166794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:36:44.0166886Z return self.act(input) 2025-08-14T21:36:44.0166899Z 2025-08-14T21:36:44.0167040Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0167147Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0167285Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0167727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0167816Z layer_outputs = layer_module( 2025-08-14T21:36:44.0168104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0168203Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0168557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0168705Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0169061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0169159Z self_outputs = self.self( 2025-08-14T21:36:44.0169512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0169636Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0170077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0170309Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0170323Z 2025-08-14T21:36:44.0170423Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0170551Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0171003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0171126Z layer_outputs = layer_module( 2025-08-14T21:36:44.0171417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0171521Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0171875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0171970Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0172330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0172414Z self_outputs = self.self( 2025-08-14T21:36:44.0172773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0172950Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0173380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0173581Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False)) 2025-08-14T21:36:44.0173935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk 2025-08-14T21:36:44.0174029Z hidden_states = hidden_states.view( 2025-08-14T21:36:44.0174042Z 2025-08-14T21:36:44.0174177Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0174619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0174716Z layer_outputs = layer_module( 2025-08-14T21:36:44.0175000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0175097Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0175510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0175603Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0175956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0176048Z self_outputs = self.self( 2025-08-14T21:36:44.0176400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0176535Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0177009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0177243Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0177262Z 2025-08-14T21:36:44.0177388Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0177831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0177928Z layer_outputs = layer_module( 2025-08-14T21:36:44.0178205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0178303Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0178661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0178757Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0179122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0179207Z self_outputs = self.self( 2025-08-14T21:36:44.0179559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0179693Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0180120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0180354Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0180367Z 2025-08-14T21:36:44.0180493Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0180982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0189487Z layer_outputs = layer_module( 2025-08-14T21:36:44.0189869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0189979Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0190475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0190576Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0191073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0191164Z self_outputs = self.self( 2025-08-14T21:36:44.0191658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0191821Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0192257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0192492Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0192505Z 2025-08-14T21:36:44.0192602Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0192702Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0192839Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0193290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0193389Z layer_outputs = layer_module( 2025-08-14T21:36:44.0193730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0193832Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0194196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0194290Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0194641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0194737Z self_outputs = self.self( 2025-08-14T21:36:44.0195090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-08-14T21:36:44.0195186Z attn_scores += diagonal_mask 2025-08-14T21:36:44.0195198Z 2025-08-14T21:36:44.0195324Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0197890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0197991Z layer_outputs = layer_module( 2025-08-14T21:36:44.0198272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0198378Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0198734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0198829Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0199189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0199273Z self_outputs = self.self( 2025-08-14T21:36:44.0199625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-08-14T21:36:44.0199733Z attn_probs = nn.functional.softmax( 2025-08-14T21:36:44.0199746Z 2025-08-14T21:36:44.0199893Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0200027Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0200473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0200560Z layer_outputs = layer_module( 2025-08-14T21:36:44.0200845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0200943Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0201379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0201473Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0201827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0201924Z self_outputs = self.self( 2025-08-14T21:36:44.0202272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0202420Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0202871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0203087Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-08-14T21:36:44.0203336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:44.0203455Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:44.0203514Z 2025-08-14T21:36:44.0203641Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0204095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0204183Z layer_outputs = layer_module( 2025-08-14T21:36:44.0204472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0204569Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0204923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0205023Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0205379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0205476Z self_outputs = self.self( 2025-08-14T21:36:44.0205826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0205974Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0206425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0206589Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-08-14T21:36:44.0206997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-08-14T21:36:44.0207108Z chunked_hidden_states = nn.functional.pad( 2025-08-14T21:36:44.0207349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:44.0207473Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:44.0207488Z 2025-08-14T21:36:44.0207614Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0208127Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0208226Z layer_outputs = layer_module( 2025-08-14T21:36:44.0208506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0208609Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0208965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0209058Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0209425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0209513Z self_outputs = self.self( 2025-08-14T21:36:44.0209870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0210017Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0210578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0210773Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:36:44.0210786Z 2025-08-14T21:36:44.0210912Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0211365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0211454Z layer_outputs = layer_module( 2025-08-14T21:36:44.0211789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0211894Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0212251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0212346Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0212705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0212790Z self_outputs = self.self( 2025-08-14T21:36:44.0213152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0213292Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0213742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0213945Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:36:44.0213957Z 2025-08-14T21:36:44.0214091Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0214549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0214638Z layer_outputs = layer_module( 2025-08-14T21:36:44.0214916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0215021Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0215374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0215479Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0215835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0215921Z self_outputs = self.self( 2025-08-14T21:36:44.0216326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-08-14T21:36:44.0216562Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-08-14T21:36:44.0216576Z 2025-08-14T21:36:44.0216674Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0216797Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0216954Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0217410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0217500Z layer_outputs = layer_module( 2025-08-14T21:36:44.0217782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0217886Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0218245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward 2025-08-14T21:36:44.0218357Z layer_output = apply_chunking_to_forward( 2025-08-14T21:36:44.0218687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:36:44.0218786Z return forward_fn(*input_tensors) 2025-08-14T21:36:44.0219155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk 2025-08-14T21:36:44.0219292Z intermediate_output = self.intermediate(attn_output) 2025-08-14T21:36:44.0219643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward 2025-08-14T21:36:44.0219840Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:36:44.0220114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:36:44.0220214Z return self.act(input) 2025-08-14T21:36:44.0220228Z 2025-08-14T21:36:44.0220325Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0220419Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0220556Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0220998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0221097Z layer_outputs = layer_module( 2025-08-14T21:36:44.0221374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0221475Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0221839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0221934Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0222286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0222384Z self_outputs = self.self( 2025-08-14T21:36:44.0222737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0222869Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0223305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0223534Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0223550Z 2025-08-14T21:36:44.0223651Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0223834Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0224286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0224374Z layer_outputs = layer_module( 2025-08-14T21:36:44.0228900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0229007Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0229364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0229460Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0229826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0229916Z self_outputs = self.self( 2025-08-14T21:36:44.0230277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0230402Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0230831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0231035Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False)) 2025-08-14T21:36:44.0231389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk 2025-08-14T21:36:44.0231491Z hidden_states = hidden_states.view( 2025-08-14T21:36:44.0231504Z 2025-08-14T21:36:44.0231685Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0232135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0232232Z layer_outputs = layer_module( 2025-08-14T21:36:44.0232513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0232620Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0232977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0233073Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0233432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0233522Z self_outputs = self.self( 2025-08-14T21:36:44.0233875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0234004Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0234435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0234670Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0234683Z 2025-08-14T21:36:44.0234810Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0235255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0235351Z layer_outputs = layer_module( 2025-08-14T21:36:44.0235629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0235740Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0236142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0236234Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0236594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0236678Z self_outputs = self.self( 2025-08-14T21:36:44.0237043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0237165Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0237590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0237828Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0237840Z 2025-08-14T21:36:44.0237967Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0238424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0238514Z layer_outputs = layer_module( 2025-08-14T21:36:44.0238792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0238904Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0239326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0239420Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0239837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0239971Z self_outputs = self.self( 2025-08-14T21:36:44.0240339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0240463Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0240890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0241217Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0241231Z 2025-08-14T21:36:44.0241330Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0241436Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0241567Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0242016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0242116Z layer_outputs = layer_module( 2025-08-14T21:36:44.0242399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0242498Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0242859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0242953Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0243312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0243404Z self_outputs = self.self( 2025-08-14T21:36:44.0243816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-08-14T21:36:44.0243924Z attn_scores += diagonal_mask 2025-08-14T21:36:44.0243937Z 2025-08-14T21:36:44.0244072Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0244572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0244661Z layer_outputs = layer_module( 2025-08-14T21:36:44.0244938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0245042Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0245398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0245499Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0245849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0245938Z self_outputs = self.self( 2025-08-14T21:36:44.0246300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-08-14T21:36:44.0246401Z attn_probs = nn.functional.softmax( 2025-08-14T21:36:44.0246414Z 2025-08-14T21:36:44.0246511Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0246647Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0247092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0247186Z layer_outputs = layer_module( 2025-08-14T21:36:44.0247461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0247559Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0247984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0248082Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0248442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0248532Z self_outputs = self.self( 2025-08-14T21:36:44.0249260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0249424Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0249871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0250089Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-08-14T21:36:44.0250346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:44.0250470Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:44.0250484Z 2025-08-14T21:36:44.0250621Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0251063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0251152Z layer_outputs = layer_module( 2025-08-14T21:36:44.0251437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0251534Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0251901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0251995Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0252355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0252450Z self_outputs = self.self( 2025-08-14T21:36:44.0252920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0253071Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0253513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0257855Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-08-14T21:36:44.0258265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-08-14T21:36:44.0258378Z chunked_hidden_states = nn.functional.pad( 2025-08-14T21:36:44.0258627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:44.0258756Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:44.0258772Z 2025-08-14T21:36:44.0258897Z cudagraph partition due to non gpu ops. 
Found from :
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
    self_attn_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward
    attn_output = self._sliding_chunks_matmul_attn_probs_value(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value
    context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value))

2025-08-14T21:36:44.0262079Z cudagraph partition due to non gpu ops. Found from :
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
    self_attn_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward
    attn_output = self._sliding_chunks_matmul_attn_probs_value(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value
    context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value))

2025-08-14T21:36:44.0265264Z cudagraph partition due to non gpu ops. Found from :
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
    self_attn_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward
    attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous()

2025-08-14T21:36:44.0267806Z cudagraph partition due to non gpu ops
2025-08-14T21:36:44.0267901Z cudagraph partition due to non gpu ops
2025-08-14T21:36:44.0268097Z cudagraph partition due to non gpu ops. Found from :
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward
    layer_output = apply_chunking_to_forward(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
    return forward_fn(*input_tensors)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk
    intermediate_output = self.intermediate(attn_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)

2025-08-14T21:36:44.0271497Z cudagraph partition due to non gpu ops
2025-08-14T21:36:44.0271598Z cudagraph partition due to non gpu ops
2025-08-14T21:36:44.0271723Z cudagraph partition due to non gpu ops. Found from :
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
    self_attn_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward
    attn_scores = self._sliding_chunks_query_key_matmul(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul
    diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply

2025-08-14T21:36:44.0274942Z cudagraph partition due to non gpu ops
2025-08-14T21:36:44.0275066Z cudagraph partition due to non gpu ops. Found from :
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
    self_attn_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward
    attn_scores = self._sliding_chunks_query_key_matmul(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul
    key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk
    hidden_states = hidden_states.view(

2025-08-14T21:36:44.0278638Z cudagraph partition due to non gpu ops.
Found from :
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
    self_attn_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward
    attn_scores = self._sliding_chunks_query_key_matmul(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul
    diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply

2025-08-14T21:36:44.0292856Z cudagraph partition due to non gpu ops
2025-08-14T21:36:44.0292962Z cudagraph partition due to non gpu ops
2025-08-14T21:36:44.0293092Z cudagraph partition due to non gpu ops. Found from :
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
    self_attn_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward
    attn_scores += diagonal_mask

2025-08-14T21:36:44.0295559Z cudagraph partition due to non gpu ops. Found from :
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
    self_attn_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward
    attn_probs = nn.functional.softmax(

2025-08-14T21:36:44.0298137Z cudagraph partition due to non gpu ops
2025-08-14T21:36:44.0298267Z cudagraph partition due to non gpu ops. Found from :
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
    self_attn_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward
    attn_output = self._sliding_chunks_matmul_attn_probs_value(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value
    padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad
    return torch._C._nn.pad(input, pad, mode, value)

2025-08-14T21:36:44.0301920Z cudagraph partition due to non gpu ops. Found from :
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
    self_attn_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward
    attn_output = self._sliding_chunks_matmul_attn_probs_value(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value
    chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize
    chunked_hidden_states = nn.functional.pad(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad
    return torch._C._nn.pad(input, pad, mode, value)

2025-08-14T21:36:44.0305921Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:36:44.0441360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0441449Z layer_outputs = layer_module( 2025-08-14T21:36:44.0441726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0441830Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0446411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0446518Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0446879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0446965Z self_outputs = self.self( 2025-08-14T21:36:44.0447380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0447506Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0447945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0448177Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0448191Z 2025-08-14T21:36:44.0448286Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0448420Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0449238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0449346Z layer_outputs = layer_module( 2025-08-14T21:36:44.0449629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0449729Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0450093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0450192Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0450546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0450641Z self_outputs = self.self( 2025-08-14T21:36:44.0450993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0451235Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0451669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0451861Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False)) 2025-08-14T21:36:44.0452223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk 2025-08-14T21:36:44.0452316Z hidden_states = hidden_states.view( 2025-08-14T21:36:44.0452329Z 2025-08-14T21:36:44.0452467Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0452913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0453002Z layer_outputs = layer_module( 2025-08-14T21:36:44.0453294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0453393Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0453755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0453848Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0454200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0454293Z self_outputs = self.self( 2025-08-14T21:36:44.0454646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0454771Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0455209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0455443Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0455527Z 2025-08-14T21:36:44.0455664Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0456112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0456203Z layer_outputs = layer_module( 2025-08-14T21:36:44.0456491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0456670Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0457080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0457176Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0457529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0457632Z self_outputs = self.self( 2025-08-14T21:36:44.0457984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0458113Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0458542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0458771Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0458784Z 2025-08-14T21:36:44.0458922Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0459369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0459531Z layer_outputs = layer_module( 2025-08-14T21:36:44.0459812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0459910Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0460273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0460371Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0460722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0460817Z self_outputs = self.self( 2025-08-14T21:36:44.0461222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:36:44.0461361Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:36:44.0461791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:36:44.0462018Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:36:44.0462030Z 2025-08-14T21:36:44.0462137Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0462235Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0462370Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0462814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0462900Z layer_outputs = layer_module( 2025-08-14T21:36:44.0463184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0463284Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0463681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0463782Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0464137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0464229Z self_outputs = self.self( 2025-08-14T21:36:44.0464581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-08-14T21:36:44.0464673Z attn_scores += diagonal_mask 2025-08-14T21:36:44.0464685Z 2025-08-14T21:36:44.0464820Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0465264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0465361Z layer_outputs = layer_module( 2025-08-14T21:36:44.0465640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0465735Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0466092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0466183Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0466533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0466625Z self_outputs = self.self( 2025-08-14T21:36:44.0466974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-08-14T21:36:44.0467077Z attn_probs = nn.functional.softmax( 2025-08-14T21:36:44.0467137Z 2025-08-14T21:36:44.0467234Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0467358Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0467811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0467900Z layer_outputs = layer_module( 2025-08-14T21:36:44.0468182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0468277Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0468626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0468724Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0469094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0469182Z self_outputs = self.self( 2025-08-14T21:36:44.0469543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0469689Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0470133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0470358Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-08-14T21:36:44.0470600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:44.0470729Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:44.0470742Z 2025-08-14T21:36:44.0470868Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0479706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0479864Z layer_outputs = layer_module( 2025-08-14T21:36:44.0480240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0480350Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0480847Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0480947Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0481533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0481627Z self_outputs = self.self( 2025-08-14T21:36:44.0482108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0482266Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0482712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0482887Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-08-14T21:36:44.0483288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-08-14T21:36:44.0483400Z chunked_hidden_states = nn.functional.pad( 2025-08-14T21:36:44.0483647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:36:44.0483770Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:36:44.0483783Z 2025-08-14T21:36:44.0483917Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0484407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0484497Z layer_outputs = layer_module( 2025-08-14T21:36:44.0484786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0484882Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0485240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0485344Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0485762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0485861Z self_outputs = self.self( 2025-08-14T21:36:44.0486265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0486410Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0486861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0487048Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:36:44.0487061Z 2025-08-14T21:36:44.0487196Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0487639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0487728Z layer_outputs = layer_module( 2025-08-14T21:36:44.0488012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0488112Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0488473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0488616Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0488967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0489059Z self_outputs = self.self( 2025-08-14T21:36:44.0489411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:36:44.0489554Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:36:44.0490006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:36:44.0490228Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:36:44.0490243Z 2025-08-14T21:36:44.0490380Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:36:44.0490832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0490924Z layer_outputs = layer_module( 2025-08-14T21:36:44.0491210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0491307Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0491667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:36:44.0491765Z self_attn_outputs = self.attention( 2025-08-14T21:36:44.0492118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:36:44.0492256Z self_outputs = self.self( 2025-08-14T21:36:44.0492611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-08-14T21:36:44.0492853Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-08-14T21:36:44.0492865Z 2025-08-14T21:36:44.0492964Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0493061Z cudagraph partition due to non gpu ops 2025-08-14T21:36:44.0493198Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:36:44.0493644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:36:44.0493736Z layer_outputs = layer_module( 2025-08-14T21:36:44.0494024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:36:44.0494125Z return super().__call__(*args, **kwargs) 2025-08-14T21:36:44.0494541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward 2025-08-14T21:36:44.0494650Z layer_output = apply_chunking_to_forward( 2025-08-14T21:36:44.0494983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:36:44.0495086Z return forward_fn(*input_tensors) 2025-08-14T21:36:44.0495447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk 2025-08-14T21:36:44.0495591Z intermediate_output = self.intermediate(attn_output) 2025-08-14T21:36:44.0495948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward 2025-08-14T21:36:44.0496087Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:36:44.0496369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:36:44.0496457Z return self.act(input) 2025-08-14T21:36:44.0496521Z 2025-08-14T21:36:44.0496622Z cudagraph partition due to non gpu ops 2025-08-14T21:37:44.2261392Z cudagraph partition due to non gpu ops 2025-08-14T21:37:44.2261776Z cudagraph partition due to non gpu ops 2025-08-14T21:37:44.2262089Z cudagraph partition due to non gpu ops. 
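The records above all point at ordinary tensor ops inside the Longformer attention path (nn.functional.pad, torch.einsum, softmax, and a transpose/reshape/contiguous chain). The sketch below only mirrors that op pattern; ToySlidingAttention, its shapes, and the einsum equations are illustrative stand-ins and not the benchmark's or the library's code. On a CUDA run with CUDA graphs enabled (for example torch.compile(..., mode="reduce-overhead")), Inductor appears to split the captured graph around regions it treats as non-GPU, which is what these "cudagraph partition due to non gpu ops" lines report; on this CPU-only job the messages are informational.

    # Hypothetical toy module using the same ops flagged in the tracebacks above
    # (F.pad, torch.einsum, softmax); names and shapes are illustrative only.
    import torch
    import torch.nn.functional as F

    class ToySlidingAttention(torch.nn.Module):
        def forward(self, query, key, value, window_overlap=2):
            # pad the value tensor along its sequence dimension, loosely following
            # _sliding_chunks_matmul_attn_probs_value in modeling_longformer.py
            padded_value = F.pad(value, (0, 0, window_overlap, window_overlap), value=-1)
            scores = torch.einsum("bld,bmd->blm", query, key)   # attention scores
            probs = F.softmax(scores, dim=-1)                   # attn_probs
            return torch.einsum("blm,bmd->bld", probs, padded_value[:, : key.shape[1]])

    compiled = torch.compile(ToySlidingAttention())  # default Inductor backend
    q = k = v = torch.randn(1, 8, 16)
    print(compiled(q, k, v).shape)  # torch.Size([1, 8, 16])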
2025-08-14T21:37:44.2262089Z cudagraph partition due to non gpu ops. Found from :
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1723, in torch_dynamo_resume_in_forward_at_1703
    masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:37:46.3744200Z Compilation time (from dynamo_timed): 102.106569373
2025-08-14T21:37:46.4071157Z pass
2025-08-14T21:37:46.4072465Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:37:46.4082257Z TIMING: gc:0.0091 entire_frame_compile:102.10657 _recursive_pre_grad_passes:0.19401 _recursive_joint_graph_passes:1.45143 _recursive_post_grad_passes:1.37195 async_compile.wait:3.74095 code_gen:67.42756 inductor_compile:75.27625 backend_compile:94.67706 total_wall_time:102.10657
2025-08-14T21:37:46.4083408Z STATS: call_* op count: 1787 | FakeTensorMode.__torch_dispatch__:71813 | FakeTensor.__torch_dispatch__:9282 | ProxyTorchDispatchMode.__torch_dispatch__:18266
2025-08-14T21:37:46.4084039Z Dynamo produced 4 graphs covering 1787 ops with 4 graph breaks (1 unique)
2025-08-14T21:37:54.1672589Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:37:54.1674119Z   from pkg_resources import resource_filename
2025-08-14T21:37:59.3952447Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:37:59.3952850Z loading model: 0it [00:04, ?it/s]
2025-08-14T21:37:59.3973802Z cpu eval BartForCausalLM
2025-08-14T21:38:01.6523208Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:38:02.7073442Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:38:03.8010816Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:38:18.6606600Z cudagraph partition due to non gpu ops
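The "Dynamo produced 4 graphs covering 1787 ops with 4 graph breaks (1 unique)" summary above comes from Dynamo's own accounting of how often tracing had to stop and restart. One way to get the same kind of numbers for your own function is sketched below; torch._dynamo.explain is a private, version-dependent API, and the toy function and attribute names are only assumed to match recent PyTorch 2.x releases.

    # Sketch: counting graphs and graph breaks the way the summary line above does.
    # torch._dynamo.explain is a private API; attribute names may change between releases.
    import torch
    import torch._dynamo

    def toy_fn(x):
        y = torch.relu(x)
        print("side effect")   # forces a graph break, similar in spirit to those counted above
        return torch.sin(y)

    report = torch._dynamo.explain(toy_fn)(torch.randn(8))
    print(report.graph_count)        # cf. "Dynamo produced 4 graphs"
    print(report.graph_break_count)  # cf. "... with 4 graph breaks"
    print(report.break_reasons)      # what caused each break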
partition due to non gpu ops. Found from : 2025-08-14T21:38:18.6612150Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6612937Z return mod(**inputs) 2025-08-14T21:38:18.6613444Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6613945Z outputs = self.model.decoder( 2025-08-14T21:38:18.6614429Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6614913Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6615404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6615860Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6616355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:38:18.6616893Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:38:18.6617400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:38:18.6617917Z attn_output, attn_weights = attention_interface( 2025-08-14T21:38:18.6618477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:38:18.6619085Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:38:18.6619325Z 2025-08-14T21:38:18.6619508Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:18.6619961Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6620370Z return mod(**inputs) 2025-08-14T21:38:18.6620820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6621402Z outputs = self.model.decoder( 2025-08-14T21:38:18.6621885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6622362Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6622789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6623248Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6623742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:38:18.6624254Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:38:18.6624756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:38:18.6631607Z attn_output, attn_weights = attention_interface( 2025-08-14T21:38:18.6632180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:38:18.6632758Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:38:18.6633040Z 2025-08-14T21:38:18.6633142Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6633402Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6633688Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:18.6634135Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6634564Z return mod(**inputs) 2025-08-14T21:38:18.6635019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6635499Z outputs = self.model.decoder( 2025-08-14T21:38:18.6635973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6636464Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6636907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6637425Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6637909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:38:18.6638444Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:38:18.6638930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:38:18.6639353Z return self.act(input) 2025-08-14T21:38:18.6639497Z 2025-08-14T21:38:18.6639663Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6639921Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6640229Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6640475Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6640722Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6640956Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6641293Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6641540Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6641810Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:18.6642260Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6642668Z return mod(**inputs) 2025-08-14T21:38:18.6643114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6643586Z outputs = self.model.decoder( 2025-08-14T21:38:18.6644063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6644540Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6644970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6645488Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6645973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:38:18.6646486Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:38:18.6646988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:38:18.6647498Z attn_output, attn_weights = attention_interface( 2025-08-14T21:38:18.6648054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:38:18.6648985Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:38:18.6649252Z 2025-08-14T21:38:18.6649385Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:18.6649838Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6650248Z return mod(**inputs) 2025-08-14T21:38:18.6650690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6651180Z outputs = self.model.decoder( 2025-08-14T21:38:18.6651651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6652133Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6652563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6653017Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6653500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:38:18.6654005Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:38:18.6658715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:38:18.6659332Z attn_output, attn_weights = attention_interface( 2025-08-14T21:38:18.6659895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:38:18.6660462Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:38:18.6660677Z 2025-08-14T21:38:18.6660778Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6661041Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6661332Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:18.6661768Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6662173Z return mod(**inputs) 2025-08-14T21:38:18.6662618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6663095Z outputs = self.model.decoder( 2025-08-14T21:38:18.6663566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6664041Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6664465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6664913Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6665386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:38:18.6665912Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:38:18.6666388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:38:18.6666811Z return self.act(input) 2025-08-14T21:38:18.6667038Z 2025-08-14T21:38:18.6667134Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6667385Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6667622Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6667870Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6668112Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6668349Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6668663Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6668914Z 
cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6669248Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:18.6669699Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6670102Z return mod(**inputs) 2025-08-14T21:38:18.6670551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6671034Z outputs = self.model.decoder( 2025-08-14T21:38:18.6671509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6671989Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6672417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6672873Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6673349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:38:18.6673860Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:38:18.6674355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:38:18.6674856Z attn_output, attn_weights = attention_interface( 2025-08-14T21:38:18.6675404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:38:18.6676010Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:38:18.6676245Z 2025-08-14T21:38:18.6676483Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:18.6676929Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6677341Z return mod(**inputs) 2025-08-14T21:38:18.6677787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6678280Z outputs = self.model.decoder( 2025-08-14T21:38:18.6678757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6679245Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6679676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6680136Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6680623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:38:18.6681192Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:38:18.6681688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:38:18.6682189Z attn_output, attn_weights = attention_interface( 2025-08-14T21:38:18.6682748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:38:18.6687558Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:38:18.6687829Z 2025-08-14T21:38:18.6687929Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6688185Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6688529Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:18.6688966Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6689378Z return mod(**inputs) 2025-08-14T21:38:18.6689818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6690287Z outputs = self.model.decoder( 2025-08-14T21:38:18.6690752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6691224Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6691655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6692097Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6692576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:38:18.6693105Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:38:18.6693592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:38:18.6694006Z return self.act(input) 2025-08-14T21:38:18.6694156Z 2025-08-14T21:38:18.6694252Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6694502Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6694746Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6694997Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6695244Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6695478Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6695724Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6695966Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6696243Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:18.6696681Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6697088Z return mod(**inputs) 2025-08-14T21:38:18.6697665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6698198Z outputs = self.model.decoder( 2025-08-14T21:38:18.6698668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6699147Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6699592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6700031Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6700511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:38:18.6701023Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:38:18.6701522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:38:18.6702036Z attn_output, attn_weights = attention_interface( 2025-08-14T21:38:18.6702596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:38:18.6703201Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:38:18.6703431Z 2025-08-14T21:38:18.6703563Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:18.6704006Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6704410Z return mod(**inputs) 2025-08-14T21:38:18.6704851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6705320Z outputs = self.model.decoder( 2025-08-14T21:38:18.6705839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6706321Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6706747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6707192Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6707673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:38:18.6708178Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:38:18.6708670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:38:18.6709175Z attn_output, attn_weights = attention_interface( 2025-08-14T21:38:18.6709722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:38:18.6710289Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:38:18.6710500Z 2025-08-14T21:38:18.6710598Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6710854Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6711134Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:18.6711565Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6711968Z return mod(**inputs) 2025-08-14T21:38:18.6716688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6717178Z outputs = self.model.decoder( 2025-08-14T21:38:18.6717642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6718126Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6718563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6719015Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6719553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:38:18.6720086Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:38:18.6720571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:38:18.6720994Z return self.act(input) 2025-08-14T21:38:18.6721214Z 2025-08-14T21:38:18.6721311Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6721566Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6721803Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6722046Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6722297Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6722531Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6722778Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6723022Z 
cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6723302Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:18.6723740Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6724144Z return mod(**inputs) 2025-08-14T21:38:18.6724588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6725061Z outputs = self.model.decoder( 2025-08-14T21:38:18.6725528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6726024Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6726460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6727081Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6727569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:38:18.6728089Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:38:18.6728590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:38:18.6729102Z attn_output, attn_weights = attention_interface( 2025-08-14T21:38:18.6729655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:38:18.6730257Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:38:18.6730488Z 2025-08-14T21:38:18.6730616Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:18.6731059Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6731462Z return mod(**inputs) 2025-08-14T21:38:18.6731903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6732375Z outputs = self.model.decoder( 2025-08-14T21:38:18.6732841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6733310Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6733731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6734176Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6734651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:38:18.6735150Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:38:18.6735641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:38:18.6736143Z attn_output, attn_weights = attention_interface( 2025-08-14T21:38:18.6736744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:38:18.6737313Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:38:18.6737514Z 2025-08-14T21:38:18.6737613Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6737869Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6738148Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:18.6738581Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6738986Z return mod(**inputs) 2025-08-14T21:38:18.6739434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6739914Z outputs = self.model.decoder( 2025-08-14T21:38:18.6740383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6740860Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6750049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6750691Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6751318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:38:18.6752039Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:38:18.6752594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:38:18.6753015Z return self.act(input) 2025-08-14T21:38:18.6753158Z 2025-08-14T21:38:18.6753252Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6753631Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6753880Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6754133Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6754388Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6754628Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6754883Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6755134Z cudagraph partition due to non gpu ops 2025-08-14T21:38:18.6755426Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:18.6757963Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:38:18.6758363Z return mod(**inputs) 2025-08-14T21:38:18.6758807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:38:18.6759275Z outputs = self.model.decoder( 2025-08-14T21:38:18.6759741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:38:18.6760218Z layer_outputs = decoder_layer( 2025-08-14T21:38:18.6760651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:18.6761185Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:18.6761662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:38:18.6762171Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:38:18.6762666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:38:18.6763168Z attn_output, attn_weights = attention_interface( 2025-08-14T21:38:18.6763716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:38:18.6764318Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:38:18.6764549Z 2025-08-14T21:38:18.6764680Z cudagraph partition due to non gpu ops. 
2025-08-14T21:38:18.6961106Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:38:18.6961568Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:38:18.6961968Z return mod(**inputs)
2025-08-14T21:38:18.6962402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1917, in forward
2025-08-14T21:38:18.6962888Z logits = self.lm_head(outputs[0])
2025-08-14T21:38:18.6963052Z 
2025-08-14T21:38:18.6963186Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:38:18.6963623Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:38:18.6964020Z return mod(**inputs)
2025-08-14T21:38:18.6964466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1923, in forward
2025-08-14T21:38:18.6965037Z loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:38:18.6965286Z 
2025-08-14T21:38:27.0374845Z Compilation time (from dynamo_timed): 20.514889284
2025-08-14T21:38:27.0663453Z pass
2025-08-14T21:38:27.0664259Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:38:27.0665263Z TIMING: _recursive_pre_grad_passes:0.05515 _recursive_joint_graph_passes:0.81186 _recursive_post_grad_passes:0.11558 async_compile.wait:1.04263 code_gen:7.06555 inductor_compile:10.93946 backend_compile:17.34344 gc:0.00269 entire_frame_compile:20.51489 total_wall_time:20.51489
2025-08-14T21:38:27.0666486Z STATS: call_* op count: 372 | FakeTensorMode.__torch_dispatch__:24843 | FakeTensor.__torch_dispatch__:3951 | ProxyTorchDispatchMode.__torch_dispatch__:5633
2025-08-14T21:38:27.0667208Z Dynamo produced 1 graphs covering 372 ops with 0 graph breaks (0 unique)
2025-08-14T21:38:34.1162218Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:38:34.1163336Z from pkg_resources import resource_filename
2025-08-14T21:38:35.0951199Z 
2025-08-14T21:38:43.2888059Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:38:43.2888414Z loading model: 0it [00:08, ?it/s]
2025-08-14T21:38:43.2919820Z cpu eval BartForConditionalGeneration
2025-08-14T21:38:48.1451927Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:38:50.4561739Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:38:52.8080875Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:39:24.6237648Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:39:24.6238125Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:39:24.6238559Z return mod(**inputs)
2025-08-14T21:39:24.6239049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
2025-08-14T21:39:24.6239541Z outputs = self.model(
2025-08-14T21:39:24.6240003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward
2025-08-14T21:39:24.6240498Z encoder_outputs = self.encoder(
2025-08-14T21:39:24.6240972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward
2025-08-14T21:39:24.6241548Z layer_outputs = encoder_layer(
2025-08-14T21:39:24.6241991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:39:24.6242585Z return super().__call__(*args, **kwargs)
2025-08-14T21:39:24.6243067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward
2025-08-14T21:39:24.6243582Z hidden_states, attn_weights = self.self_attn(
2025-08-14T21:39:24.6244090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
2025-08-14T21:39:24.6244604Z attn_output, attn_weights = attention_interface(
2025-08-14T21:39:24.6249688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:39:24.6250309Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:39:24.6250555Z 
2025-08-14T21:39:24.6250748Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:39:24.6251210Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:39:24.6251617Z return mod(**inputs)
2025-08-14T21:39:24.6252058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
2025-08-14T21:39:24.6252534Z outputs = self.model(
2025-08-14T21:39:24.6252980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward
2025-08-14T21:39:24.6253455Z encoder_outputs = self.encoder(
2025-08-14T21:39:24.6253917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward
2025-08-14T21:39:24.6254392Z layer_outputs = encoder_layer(
2025-08-14T21:39:24.6254828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:39:24.6255391Z return super().__call__(*args, **kwargs)
2025-08-14T21:39:24.6255877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward
2025-08-14T21:39:24.6256374Z hidden_states, attn_weights = self.self_attn(
2025-08-14T21:39:24.6256870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
2025-08-14T21:39:24.6257368Z attn_output, attn_weights = attention_interface(
2025-08-14T21:39:24.6257934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:39:24.6258535Z attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:39:24.6258746Z 
2025-08-14T21:39:24.6259461Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:39:24.6259986Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:39:24.6260395Z return mod(**inputs)
2025-08-14T21:39:24.6260834Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
2025-08-14T21:39:24.6261306Z outputs = self.model(
2025-08-14T21:39:24.6261749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward
2025-08-14T21:39:24.6262218Z encoder_outputs = self.encoder(
2025-08-14T21:39:24.6262673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward
2025-08-14T21:39:24.6263142Z layer_outputs = encoder_layer(
2025-08-14T21:39:24.6263624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:39:24.6264077Z return super().__call__(*args, **kwargs)
2025-08-14T21:39:24.6264659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 323, in forward
2025-08-14T21:39:24.6265192Z hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:39:24.6265677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:39:24.6266099Z return self.act(input)
2025-08-14T21:39:24.6266244Z 
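The partition messages above all point at the same few call sites inside the compiled BART forward: torch.nn.functional.scaled_dot_product_attention, the attn_output.transpose(1, 2).contiguous() that follows it, and the self.activation_fn(self.fc1(hidden_states)) feed-forward entry, each reached from the benchmark's forward_pass through mod(**inputs). For reference, below is a minimal, self-contained sketch of that call pattern under torch.compile on CPU; the TinyEncoderLayer module, its shapes, and its names are illustrative assumptions, not the benchmark harness or the transformers code.

# Minimal sketch (assumed names/shapes): reproduces the call pattern named in
# the stack frames above -- SDPA, then transpose(1, 2).contiguous(), then a
# linear + activation -- inside a module compiled with torch.compile on CPU.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyEncoderLayer(nn.Module):
    def __init__(self, embed_dim: int = 64, num_heads: int = 4) -> None:
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.qkv = nn.Linear(embed_dim, 3 * embed_dim)
        self.fc1 = nn.Linear(embed_dim, 4 * embed_dim)
        self.fc2 = nn.Linear(4 * embed_dim, embed_dim)
        self.activation_fn = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        bsz, seq_len, embed_dim = hidden_states.shape
        q, k, v = self.qkv(hidden_states).chunk(3, dim=-1)
        # Reshape to (batch, heads, seq, head_dim), the layout SDPA expects.
        shape = (bsz, seq_len, self.num_heads, self.head_dim)
        q, k, v = (t.view(shape).transpose(1, 2) for t in (q, k, v))
        # Frame: attn_output = torch.nn.functional.scaled_dot_product_attention(
        attn_output = F.scaled_dot_product_attention(q, k, v)
        # Frame: attn_output = attn_output.transpose(1, 2).contiguous()
        attn_output = attn_output.transpose(1, 2).contiguous().view(bsz, seq_len, embed_dim)
        # Frame: hidden_states = self.activation_fn(self.fc1(hidden_states))
        return self.fc2(self.activation_fn(self.fc1(attn_output)))


if __name__ == "__main__":
    mod = torch.compile(TinyEncoderLayer().eval())
    inputs = {"hidden_states": torch.randn(2, 16, 64)}
    with torch.no_grad():
        out = mod(**inputs)  # mirrors `return mod(**inputs)` in forward_pass
    print(out.shape)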
Found from : 2025-08-14T21:39:24.6268788Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6269190Z return mod(**inputs) 2025-08-14T21:39:24.6269638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6270109Z outputs = self.model( 2025-08-14T21:39:24.6270557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-08-14T21:39:24.6271041Z encoder_outputs = self.encoder( 2025-08-14T21:39:24.6271519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-08-14T21:39:24.6271989Z layer_outputs = encoder_layer( 2025-08-14T21:39:24.6272477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6272929Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6273409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-08-14T21:39:24.6273969Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:39:24.6278654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6279158Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6279708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:24.6280310Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:24.6280568Z 2025-08-14T21:39:24.6280701Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6281238Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6281636Z return mod(**inputs) 2025-08-14T21:39:24.6282094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6282573Z outputs = self.model( 2025-08-14T21:39:24.6283015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-08-14T21:39:24.6283495Z encoder_outputs = self.encoder( 2025-08-14T21:39:24.6283964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-08-14T21:39:24.6284451Z layer_outputs = encoder_layer( 2025-08-14T21:39:24.6284881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6285341Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6285819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-08-14T21:39:24.6286449Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:39:24.6286941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6287446Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6288002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:24.6288689Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:24.6288901Z 2025-08-14T21:39:24.6289003Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6289257Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6289541Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6289982Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6290387Z return mod(**inputs) 2025-08-14T21:39:24.6290842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6291305Z outputs = self.model( 2025-08-14T21:39:24.6291748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-08-14T21:39:24.6292233Z encoder_outputs = self.encoder( 2025-08-14T21:39:24.6292760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-08-14T21:39:24.6293229Z layer_outputs = encoder_layer( 2025-08-14T21:39:24.6293661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6294112Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6294656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 323, in forward 2025-08-14T21:39:24.6295194Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:39:24.6295692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:39:24.6296115Z return self.act(input) 2025-08-14T21:39:24.6296255Z 2025-08-14T21:39:24.6296354Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6296613Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6296875Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6297118Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6297378Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6297632Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6297875Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6298111Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6298406Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6298864Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6299268Z return mod(**inputs) 2025-08-14T21:39:24.6299723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6300205Z outputs = self.model( 2025-08-14T21:39:24.6300655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-08-14T21:39:24.6301125Z encoder_outputs = self.encoder( 2025-08-14T21:39:24.6301599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-08-14T21:39:24.6302075Z layer_outputs = encoder_layer( 2025-08-14T21:39:24.6302495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6303010Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6307830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-08-14T21:39:24.6308332Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:39:24.6308820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6309326Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6309888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:24.6310489Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:24.6310737Z 2025-08-14T21:39:24.6310867Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6311308Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6311720Z return mod(**inputs) 2025-08-14T21:39:24.6312161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6312632Z outputs = self.model( 2025-08-14T21:39:24.6313074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-08-14T21:39:24.6313547Z encoder_outputs = self.encoder( 2025-08-14T21:39:24.6314004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-08-14T21:39:24.6314475Z layer_outputs = encoder_layer( 2025-08-14T21:39:24.6314909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6315356Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6315832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-08-14T21:39:24.6316383Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:39:24.6316884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6317434Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6318064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:24.6318640Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:24.6318843Z 2025-08-14T21:39:24.6318954Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6319207Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6319492Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:39:24.6319940Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:39:24.6320346Z     return mod(**inputs)
2025-08-14T21:39:24.6320791Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
2025-08-14T21:39:24.6321371Z     outputs = self.model(
2025-08-14T21:39:24.6321828Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward
2025-08-14T21:39:24.6322297Z     encoder_outputs = self.encoder(
2025-08-14T21:39:24.6322769Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward
2025-08-14T21:39:24.6323255Z     layer_outputs = encoder_layer(
2025-08-14T21:39:24.6323681Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:39:24.6324137Z     return super().__call__(*args, **kwargs)
2025-08-14T21:39:24.6324640Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 323, in forward
2025-08-14T21:39:24.6325184Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:39:24.6325774Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:39:24.6326204Z     return self.act(input)
2025-08-14T21:39:24.6326341Z 
2025-08-14T21:39:24.6326449Z cudagraph partition due to non gpu ops
2025-08-14T21:39:24.6326704Z cudagraph partition due to non gpu ops
2025-08-14T21:39:24.6326955Z cudagraph partition due to non gpu ops
2025-08-14T21:39:24.6327202Z cudagraph partition due to non gpu ops
2025-08-14T21:39:24.6327452Z cudagraph partition due to non gpu ops
2025-08-14T21:39:24.6327687Z cudagraph partition due to non gpu ops
2025-08-14T21:39:24.6327934Z cudagraph partition due to non gpu ops
2025-08-14T21:39:24.6328181Z cudagraph partition due to non gpu ops
2025-08-14T21:39:24.6328460Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:39:24.6328910Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:39:24.6329305Z     return mod(**inputs)
2025-08-14T21:39:24.6329744Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
2025-08-14T21:39:24.6330213Z     outputs = self.model(
2025-08-14T21:39:24.6330660Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward
2025-08-14T21:39:24.6331138Z     encoder_outputs = self.encoder(
2025-08-14T21:39:24.6331599Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward
2025-08-14T21:39:24.6340509Z     layer_outputs = encoder_layer(
2025-08-14T21:39:24.6341074Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:39:24.6341667Z     return super().__call__(*args, **kwargs)
2025-08-14T21:39:24.6342337Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward
2025-08-14T21:39:24.6342840Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:39:24.6343343Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
2025-08-14T21:39:24.6343837Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:39:24.6344392Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:39:24.6344995Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:39:24.6345229Z 
2025-08-14T21:39:24.6345369Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:39:24.6345814Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:39:24.6346269Z     return mod(**inputs)
2025-08-14T21:39:24.6349180Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
2025-08-14T21:39:24.6349649Z     outputs = self.model(
2025-08-14T21:39:24.6350107Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward
2025-08-14T21:39:24.6350583Z     encoder_outputs = self.encoder(
2025-08-14T21:39:24.6351052Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward
2025-08-14T21:39:24.6351522Z     layer_outputs = encoder_layer(
2025-08-14T21:39:24.6351959Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:39:24.6352419Z     return super().__call__(*args, **kwargs)
2025-08-14T21:39:24.6352918Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward
2025-08-14T21:39:24.6353415Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:39:24.6353904Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
2025-08-14T21:39:24.6354534Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:39:24.6355084Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:39:24.6355658Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:39:24.6355869Z 
2025-08-14T21:39:24.6355970Z cudagraph partition due to non gpu ops
2025-08-14T21:39:24.6356227Z cudagraph partition due to non gpu ops
2025-08-14T21:39:24.6356503Z cudagraph partition due to non gpu ops.
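The two attention-side entries above point at adjacent statements in transformers' SDPA integration: the torch.nn.functional.scaled_dot_product_attention call and the transpose(1, 2).contiguous() that follows it. Below is a minimal sketch of that path, not the transformers implementation; the function name and tensor shapes are illustrative assumptions.

import torch
import torch.nn.functional as F

def sdpa_attention_sketch(query, key, value, attention_mask=None):
    # Corresponds to the sdpa_attention.py line 81 frame: the fused SDPA call.
    attn_output = F.scaled_dot_product_attention(query, key, value, attn_mask=attention_mask)
    # Corresponds to the sdpa_attention.py line 91 frame: move heads back next to
    # head_dim before the output projection; .contiguous() materializes the new layout.
    return attn_output.transpose(1, 2).contiguous()

q = k = v = torch.randn(2, 8, 16, 64)  # (batch, heads, seq, head_dim), illustrative sizes
print(sdpa_attention_sketch(q, k, v).shape)  # torch.Size([2, 16, 8, 64])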
[The same three encoder call sites above (the scaled_dot_product_attention call, the transpose(1, 2).contiguous() that follows it, and the fc1 activation) are reported repeatedly, each report followed by additional "cudagraph partition due to non gpu ops" messages.]
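One of the repeated encoder entries points at the feed-forward call site, self.activation_fn(self.fc1(hidden_states)), which lands in the activation module's forward (return self.act(input)). A minimal sketch of that fragment follows; the d_model/d_ff sizes and the GELU choice are assumptions for illustration, not read from the log.

import torch
import torch.nn as nn

class EncoderFFNSketch(nn.Module):
    # Illustrative stand-in for the fc1/activation fragment named in the traceback.
    def __init__(self, d_model=768, d_ff=3072):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.activation_fn = nn.GELU()
        self.fc2 = nn.Linear(d_ff, d_model)

    def forward(self, hidden_states):
        # The modeling_bart.py line 323 frame in the traceback.
        hidden_states = self.activation_fn(self.fc1(hidden_states))
        return self.fc2(hidden_states)

print(EncoderFFNSketch()(torch.randn(2, 16, 768)).shape)  # torch.Size([2, 16, 768])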
Found from : 2025-08-14T21:39:24.6639062Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6639473Z return mod(**inputs) 2025-08-14T21:39:24.6639927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6640430Z outputs = self.model( 2025-08-14T21:39:24.6640882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6641471Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6641929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6642404Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6642833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6643288Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6643761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:24.6644270Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:24.6644816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6645315Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6645860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:24.6646457Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:24.6646691Z 2025-08-14T21:39:24.6646827Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6647277Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6647674Z return mod(**inputs) 2025-08-14T21:39:24.6648185Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6648659Z outputs = self.model( 2025-08-14T21:39:24.6649517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6649998Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6650466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6650990Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6651489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6651939Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6652424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:24.6652983Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:24.6653486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6653992Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6654548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:24.6655123Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:24.6655333Z 2025-08-14T21:39:24.6655431Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6655691Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6655932Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6656185Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6656429Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6656808Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6657044Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6657289Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6657580Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6658021Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6658423Z return mod(**inputs) 2025-08-14T21:39:24.6658878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6659345Z outputs = self.model( 2025-08-14T21:39:24.6659797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6660270Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6660734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6661205Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6661640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6662095Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6662572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:39:24.6663076Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:39:24.6663585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6664082Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6664629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:24.6665276Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:24.6669697Z 2025-08-14T21:39:24.6669837Z cudagraph partition due to non gpu ops. 
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward
    hidden_states = self.activation_fn(self.fc1(hidden_states))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
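This stack ends in the decoder layer's feed-forward path (`fc1` followed by the activation). A tiny hypothetical stand-in for that fragment, with invented dimensions:

```python
import torch
from torch import nn

class TinyDecoderFFN(nn.Module):
    # Hypothetical stand-in for the fc1 + activation fragment seen in the stack.
    def __init__(self, d_model=64, d_ff=256):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.activation_fn = nn.GELU()

    def forward(self, hidden_states):
        # Mirrors: hidden_states = self.activation_fn(self.fc1(hidden_states))
        return self.activation_fn(self.fc1(hidden_states))

ffn = TinyDecoderFFN()
out = ffn(torch.randn(2, 16, 64))  # [batch, seq_len, d_model], invented sizes
```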
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops.
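All of these stacks enter the model through the benchmark's `forward_pass`, i.e. `return mod(**inputs)`. The sketch below shows the analogous flow of compiling a small, randomly initialized BART and calling it with keyword inputs; it is not the benchmark harness, and the config sizes and input shapes are arbitrary.

```python
import torch
from transformers import BartConfig, BartForConditionalGeneration

# Small, randomly initialized BART; the sizes here are arbitrary.
config = BartConfig(
    vocab_size=1024, d_model=64,
    encoder_layers=2, decoder_layers=2,
    encoder_attention_heads=4, decoder_attention_heads=4,
    encoder_ffn_dim=128, decoder_ffn_dim=128,
)
mod = BartForConditionalGeneration(config).eval()

compiled = torch.compile(mod)  # backend/mode left at their defaults in this sketch

inputs = {"input_ids": torch.randint(0, config.vocab_size, (1, 16))}
with torch.no_grad():
    outputs = compiled(**inputs)  # analogous to `return mod(**inputs)` in forward_pass
```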
Found from : 2025-08-14T21:39:24.6899013Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6899093Z return mod(**inputs) 2025-08-14T21:39:24.6899413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6899506Z outputs = self.model( 2025-08-14T21:39:24.6899825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6899922Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6900233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6900321Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6900606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6900703Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6901031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:39:24.6901224Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:39:24.6901549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:39:24.6901642Z return self.act(input) 2025-08-14T21:39:24.6901657Z 2025-08-14T21:39:24.6901752Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6901844Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6901942Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6902032Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6902129Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6902223Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6902313Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6902412Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6902540Z cudagraph partition due to non gpu ops. 
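The five tracebacks above all point at the same three call sites inside one BART decoder layer: the torch.nn.functional.scaled_dot_product_attention call in sdpa_attention_forward (line 81), the attn_output.transpose(1, 2).contiguous() that follows it (line 91), and the activation_fn(self.fc1(...)) step of the feed-forward block. Below is a minimal, hypothetical stand-in for such a layer (not the transformers code and not the benchmark harness), compiled with the inductor backend on CPU like this job; whether Inductor actually prints "cudagraph partition due to non gpu ops" at these sites depends on its version and graph-partition configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyDecoderBlock(nn.Module):
    def __init__(self, embed_dim: int = 64, num_heads: int = 4) -> None:
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.qkv = nn.Linear(embed_dim, 3 * embed_dim)
        self.out_proj = nn.Linear(embed_dim, embed_dim)
        self.fc1 = nn.Linear(embed_dim, 4 * embed_dim)
        self.fc2 = nn.Linear(4 * embed_dim, embed_dim)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        bsz, seq_len, embed_dim = hidden_states.shape
        q, k, v = self.qkv(hidden_states).chunk(3, dim=-1)
        # (bsz, seq, dim) -> (bsz, heads, seq, head_dim), mirroring modeling_bart.py
        q, k, v = (
            t.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2)
            for t in (q, k, v)
        )
        # sdpa_attention.py line 81 in the tracebacks: the SDPA call itself
        attn = F.scaled_dot_product_attention(q, k, v)
        # sdpa_attention.py line 91: transpose + contiguous before the output projection
        attn = attn.transpose(1, 2).contiguous().view(bsz, seq_len, embed_dim)
        hidden_states = self.out_proj(attn)
        # modeling_bart.py line 445: activation_fn(self.fc1(hidden_states))
        return self.fc2(F.gelu(self.fc1(hidden_states)))


if __name__ == "__main__":
    block = TinyDecoderBlock()
    compiled = torch.compile(block, backend="inductor")  # CPU run, as in this job
    print(compiled(torch.randn(2, 16, 64)).shape)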
Found from : 2025-08-14T21:39:24.6902788Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6902882Z return mod(**inputs) 2025-08-14T21:39:24.6903196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6903279Z outputs = self.model( 2025-08-14T21:39:24.6903595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6903683Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6904005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6904097Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6904372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6904523Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6904839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:24.6904968Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:24.6905275Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6905394Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6905762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:24.6905924Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:24.6905936Z 2025-08-14T21:39:24.6906079Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6906336Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6906427Z return mod(**inputs) 2025-08-14T21:39:24.6906744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6906833Z outputs = self.model( 2025-08-14T21:39:24.6907144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6907233Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6907552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6907644Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6907927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6908024Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6908332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:24.6908466Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:24.6908824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6908951Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6909316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:24.6909445Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:24.6909457Z 2025-08-14T21:39:24.6909561Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6909654Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6909748Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6909847Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6909937Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6910043Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6910139Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6910229Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6910367Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6910614Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6910693Z return mod(**inputs) 2025-08-14T21:39:24.6911014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6911095Z outputs = self.model( 2025-08-14T21:39:24.6911411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6911508Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6911878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6920435Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6920820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6920929Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6921431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:39:24.6921587Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:39:24.6922008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6922141Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6922639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:24.6922835Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:24.6922856Z 2025-08-14T21:39:24.6922998Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6923332Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6923423Z return mod(**inputs) 2025-08-14T21:39:24.6923848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6923942Z outputs = self.model( 2025-08-14T21:39:24.6924367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6924463Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6924896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6924990Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6925284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6925384Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6925749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:39:24.6925891Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:39:24.6926243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6926361Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6926810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:24.6926940Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:24.6926953Z 2025-08-14T21:39:24.6927060Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6927159Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6927285Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6927550Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6927632Z return mod(**inputs) 2025-08-14T21:39:24.6927944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6928034Z outputs = self.model( 2025-08-14T21:39:24.6928346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6928442Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6928753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6928841Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6929126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6929272Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6929596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:39:24.6929743Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:39:24.6930017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:39:24.6930110Z return self.act(input) 2025-08-14T21:39:24.6930122Z 2025-08-14T21:39:24.6930218Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6930338Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6930451Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6930548Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6930642Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6930740Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6930834Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6930931Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6931056Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6931311Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6931400Z return mod(**inputs) 2025-08-14T21:39:24.6931714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6931801Z outputs = self.model( 2025-08-14T21:39:24.6932120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6932215Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6932532Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6932624Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6932901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6933055Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6933369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:24.6933493Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:24.6933810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6933929Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6934303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:24.6934483Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:24.6934505Z 2025-08-14T21:39:24.6934644Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6934906Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6934990Z return mod(**inputs) 2025-08-14T21:39:24.6935311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6935393Z outputs = self.model( 2025-08-14T21:39:24.6935704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6935802Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6936110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6936203Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6936485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6936629Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6936953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:24.6937072Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:24.6937380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6937509Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6937872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:24.6938013Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:24.6938025Z 2025-08-14T21:39:24.6938119Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6938211Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6938316Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6938409Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6938499Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6938597Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6938692Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6938787Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6938923Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6939173Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6939259Z return mod(**inputs) 2025-08-14T21:39:24.6939572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6939654Z outputs = self.model( 2025-08-14T21:39:24.6939974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6940067Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6940384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6940514Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6940850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6940956Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6941347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:39:24.6941478Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:39:24.6941794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6941910Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6942281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:24.6942442Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:24.6942455Z 2025-08-14T21:39:24.6942585Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6942841Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6942921Z return mod(**inputs) 2025-08-14T21:39:24.6943240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6943324Z outputs = self.model( 2025-08-14T21:39:24.6943635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6943732Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6944047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6944179Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6944467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6944566Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6944887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:39:24.6945019Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:39:24.6945327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6945451Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6945814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:24.6945944Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:24.6945967Z 2025-08-14T21:39:24.6946062Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6946156Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6946295Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6946543Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6946625Z return mod(**inputs) 2025-08-14T21:39:24.6946948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6947031Z outputs = self.model( 2025-08-14T21:39:24.6947354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6947445Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6947756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6947854Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6948131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6948271Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6948593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:39:24.6949157Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:39:24.6949440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:39:24.6949533Z return self.act(input) 2025-08-14T21:39:24.6949546Z 2025-08-14T21:39:24.6949642Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6949744Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6949840Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6949935Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6950045Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6950137Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6950229Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6950340Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6950468Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6950728Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6950812Z return mod(**inputs) 2025-08-14T21:39:24.6951129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6951225Z outputs = self.model( 2025-08-14T21:39:24.6951543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6951635Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6951961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6952177Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6952466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6952567Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6952878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:24.6953006Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:24.6953317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6953450Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6953815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:24.6953979Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:24.6953992Z 2025-08-14T21:39:24.6954125Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6954379Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6954461Z return mod(**inputs) 2025-08-14T21:39:24.6954782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6954863Z outputs = self.model( 2025-08-14T21:39:24.6955247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6955338Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6961882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6961984Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6962267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6962451Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6962765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:24.6962890Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:24.6963205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6963321Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6963689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:24.6963831Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:24.6963843Z 2025-08-14T21:39:24.6963942Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6964041Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6964134Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6964228Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6964326Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6964420Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6964515Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6964614Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6964737Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6964997Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6965081Z return mod(**inputs) 2025-08-14T21:39:24.6965390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6965481Z outputs = self.model( 2025-08-14T21:39:24.6965837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6965930Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6966251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6966340Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6966627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6966727Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6967037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:39:24.6967179Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:39:24.6967488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6967611Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6967991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:24.6968154Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:24.6968167Z 2025-08-14T21:39:24.6968298Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6968545Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6968626Z return mod(**inputs) 2025-08-14T21:39:24.6968947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6969029Z outputs = self.model( 2025-08-14T21:39:24.6969356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6969453Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6969817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6969962Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6970317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6970418Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6970735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:39:24.6970870Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:39:24.6971185Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6971302Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6971668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:24.6971808Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:24.6971821Z 2025-08-14T21:39:24.6971921Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6972024Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6972149Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6972399Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6972492Z return mod(**inputs) 2025-08-14T21:39:24.6972804Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6972886Z outputs = self.model( 2025-08-14T21:39:24.6973208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6973299Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6973660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6973753Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6974032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6974140Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6974449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:39:24.6974599Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:39:24.6974879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:39:24.6974965Z return self.act(input) 2025-08-14T21:39:24.6974978Z 2025-08-14T21:39:24.6975081Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6975177Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6975267Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6975364Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6975454Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6975547Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6975646Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6975738Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6975914Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6976169Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6976250Z return mod(**inputs) 2025-08-14T21:39:24.6976568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6976650Z outputs = self.model( 2025-08-14T21:39:24.6976963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6977062Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6977418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6977516Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6977792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6977891Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6978205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:24.6978326Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:24.6978634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6978757Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6979123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:24.6979294Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:24.6979306Z 2025-08-14T21:39:24.6979429Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6979676Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6979766Z return mod(**inputs) 2025-08-14T21:39:24.6980079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6980170Z outputs = self.model( 2025-08-14T21:39:24.6980479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6980568Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6980886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6981018Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6981302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6981404Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6981715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:24.6981845Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:24.6982151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6982266Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6982634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:24.6982771Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:24.6982783Z 2025-08-14T21:39:24.6982886Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6982986Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6983082Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6983180Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6983272Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6983364Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6983462Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6983552Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6983678Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6983937Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6984019Z return mod(**inputs) 2025-08-14T21:39:24.6984397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6984485Z outputs = self.model( 2025-08-14T21:39:24.6989088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6989190Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6989503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6989592Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6989877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6989979Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6990297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:39:24.6990429Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:39:24.6990744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6990872Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6991239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:24.6991408Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:24.6991420Z 2025-08-14T21:39:24.6991545Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6991799Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6991889Z return mod(**inputs) 2025-08-14T21:39:24.6992202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6992286Z outputs = self.model( 2025-08-14T21:39:24.6992650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6992740Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6993063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6993152Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6993435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6993538Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6993850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:39:24.6993981Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:39:24.6994297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.6994415Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.6994791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:24.6994919Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:24.6994931Z 2025-08-14T21:39:24.6995026Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6995127Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6995252Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6995507Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6995587Z return mod(**inputs) 2025-08-14T21:39:24.6995900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6995989Z outputs = self.model( 2025-08-14T21:39:24.6996301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.6996406Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.6996770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.6996860Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.6997147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.6997244Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.6997553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:39:24.6997704Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:39:24.6997972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:39:24.6998056Z return self.act(input) 2025-08-14T21:39:24.6998082Z 2025-08-14T21:39:24.6998177Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6998269Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6998370Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6998460Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6998550Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6998669Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6998789Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6998884Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.6999016Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.6999335Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.6999423Z return mod(**inputs) 2025-08-14T21:39:24.6999735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.6999863Z outputs = self.model( 2025-08-14T21:39:24.7000181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.7000273Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.7000583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.7000681Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.7000956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.7001113Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.7001457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:24.7001578Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:24.7001893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.7002014Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.7002381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:24.7002545Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:24.7002558Z 2025-08-14T21:39:24.7002683Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.7002992Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.7003076Z return mod(**inputs) 2025-08-14T21:39:24.7003390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.7003479Z outputs = self.model( 2025-08-14T21:39:24.7003791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.7003898Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.7004251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.7004340Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.7004623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.7004718Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.7005030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:24.7005160Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:24.7005469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.7005590Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.7005960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:24.7006093Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:24.7006109Z 2025-08-14T21:39:24.7006216Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.7006309Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.7006410Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.7006506Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.7006598Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.7006697Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.7006788Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.7006879Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.7007010Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.7007260Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.7007386Z return mod(**inputs) 2025-08-14T21:39:24.7007704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.7007790Z outputs = self.model( 2025-08-14T21:39:24.7008113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.7008203Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.7008516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.7008616Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.7008898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.7008997Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.7009316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:39:24.7009450Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:39:24.7009772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.7009889Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.7010254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:24.7010423Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:24.7010435Z 2025-08-14T21:39:24.7010561Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.7010819Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.7010902Z return mod(**inputs) 2025-08-14T21:39:24.7011215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.7011314Z outputs = self.model( 2025-08-14T21:39:24.7011702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.7011798Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.7012118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.7012211Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.7012497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.7012598Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.7012911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:39:24.7013057Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:39:24.7013430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:24.7017801Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:24.7018173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:24.7018313Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:24.7018326Z 2025-08-14T21:39:24.7018433Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.7018528Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.7018659Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:24.7018924Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.7019012Z return mod(**inputs) 2025-08-14T21:39:24.7019337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:39:24.7019478Z outputs = self.model( 2025-08-14T21:39:24.7019809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:39:24.7019914Z decoder_outputs = self.decoder( 2025-08-14T21:39:24.7020227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:24.7020320Z layer_outputs = decoder_layer( 2025-08-14T21:39:24.7020608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:24.7020705Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:24.7021020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:39:24.7021164Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:39:24.7021435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:39:24.7021531Z return self.act(input) 2025-08-14T21:39:24.7021543Z 2025-08-14T21:39:24.7021643Z cudagraph partition due to non gpu ops 2025-08-14T21:39:24.7021779Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:39:24.7022029Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:24.7022114Z return mod(**inputs) 2025-08-14T21:39:24.7022432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1490, in forward 2025-08-14T21:39:24.7022530Z lm_logits = self.lm_head(outputs[0]) 2025-08-14T21:39:24.7022542Z 2025-08-14T21:39:24.7022665Z cudagraph partition due to non gpu ops. 
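The lm_head projection reported just above and the masked-LM loss reported in the block that follows mark the tail of the BartForConditionalGeneration forward pass. A tiny stand-alone sketch of that tail, with made-up shapes and a placeholder vocab_size rather than the benchmark's real config:

import torch
import torch.nn as nn

bsz, seq, hidden, vocab_size = 2, 16, 64, 50265  # made-up shapes, not the benchmark's config
lm_head = nn.Linear(hidden, vocab_size, bias=False)
loss_fct = nn.CrossEntropyLoss()

decoder_out = torch.randn(bsz, seq, hidden)            # stands in for outputs[0] in the trace
lm_logits = lm_head(decoder_out)                       # modeling_bart.py line 1490
labels = torch.randint(0, vocab_size, (bsz, seq))
masked_lm_loss = loss_fct(lm_logits.view(-1, vocab_size), labels.view(-1))  # line 1497
print(masked_lm_loss.item())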
Found from :
2025-08-14T21:39:24.7022921Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:39:24.7023000Z return mod(**inputs)
2025-08-14T21:39:24.7023321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1497, in forward
2025-08-14T21:39:24.7023585Z masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:39:24.7023599Z
2025-08-14T21:39:38.9099087Z Compilation time (from dynamo_timed): 42.149315595
2025-08-14T21:39:38.9324027Z pass
2025-08-14T21:39:38.9324980Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:39:38.9326768Z TIMING: _recursive_pre_grad_passes:0.13339 _recursive_joint_graph_passes:1.52497 _recursive_post_grad_passes:0.25563 async_compile.wait:1.06664 code_gen:10.87333 inductor_compile:17.79047 backend_compile:33.87903 gc:0.00041 entire_frame_compile:42.14932 total_wall_time:42.14932
2025-08-14T21:39:38.9328785Z STATS: call_* op count: 980 | FakeTensorMode.__torch_dispatch__:63398 | FakeTensor.__torch_dispatch__:9772 | ProxyTorchDispatchMode.__torch_dispatch__:13946
2025-08-14T21:39:38.9330040Z Dynamo produced 1 graphs covering 980 ops with 0 graph breaks (0 unique)
2025-08-14T21:39:46.6264544Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:39:46.6265614Z from pkg_resources import resource_filename
2025-08-14T21:39:47.3977249Z
2025-08-14T21:39:49.5115774Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:39:49.5116143Z loading model: 0it [00:02, ?it/s]
2025-08-14T21:39:49.5136494Z cpu eval BertForMaskedLM
2025-08-14T21:39:50.4122803Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:39:50.9239915Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:39:51.4156502Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:40:06.5264243Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5264701Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5265076Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5265408Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5265768Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5266066Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5266306Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5266546Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5266798Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5267039Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5267281Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5267524Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5267773Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5268021Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5268263Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5271990Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5272309Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5272604Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5272935Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5273263Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:40:06.5273841Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:40:06.5274323Z return mod(**inputs)
2025-08-14T21:40:06.5274901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward
2025-08-14T21:40:06.5275451Z outputs = self.bert(
2025-08-14T21:40:06.5276006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward
2025-08-14T21:40:06.5276585Z encoder_outputs = self.encoder(
2025-08-14T21:40:06.5277129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward
2025-08-14T21:40:06.5278023Z layer_outputs = layer_module(
2025-08-14T21:40:06.5278546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:40:06.5279120Z return super().__call__(*args, **kwargs)
2025-08-14T21:40:06.5279721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward
2025-08-14T21:40:06.5284564Z self_attention_outputs = self.attention(
2025-08-14T21:40:06.5285053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:40:06.5285532Z return func(*args, **kwargs)
2025-08-14T21:40:06.5285987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward
2025-08-14T21:40:06.5286500Z self_outputs = self.self(
2025-08-14T21:40:06.5286985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:40:06.5287448Z return func(*args, **kwargs)
2025-08-14T21:40:06.5287902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward
2025-08-14T21:40:06.5288450Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:40:06.5288690Z
2025-08-14T21:40:06.5288804Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5289133Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5289458Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:40:06.5290026Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:40:06.5290431Z return mod(**inputs)
2025-08-14T21:40:06.5290989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward
2025-08-14T21:40:06.5291469Z outputs = self.bert(
2025-08-14T21:40:06.5291921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward
2025-08-14T21:40:06.5292450Z encoder_outputs = self.encoder(
2025-08-14T21:40:06.5292974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward
2025-08-14T21:40:06.5293457Z layer_outputs = layer_module(
2025-08-14T21:40:06.5293887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:40:06.5294342Z return super().__call__(*args, **kwargs)
2025-08-14T21:40:06.5294913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward
2025-08-14T21:40:06.5295484Z layer_output = apply_chunking_to_forward(
2025-08-14T21:40:06.5296006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:40:06.5296497Z return forward_fn(*input_tensors)
2025-08-14T21:40:06.5297009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk
2025-08-14T21:40:06.5297584Z intermediate_output = self.intermediate(attention_output)
2025-08-14T21:40:06.5298107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward
2025-08-14T21:40:06.5298650Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:40:06.5299124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:40:06.5299636Z return self.act(input)
2025-08-14T21:40:06.5299800Z
2025-08-14T21:40:06.5299937Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5300271Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5300545Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5300906Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5301279Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5301566Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5301846Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5302128Z cudagraph partition due to non gpu ops
2025-08-14T21:40:06.5302505Z cudagraph partition due to non gpu ops.
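The two "Found from :" traces above bottom out in torch.nn.functional.scaled_dot_product_attention (modeling_bert.py line 438) and in the intermediate activation (activations.py line 69). A minimal sketch of the attention call on plain CPU tensors; the shapes (batch=1, heads=12, seq=128, head_dim=64) are illustrative assumptions, not values taken from this run:

import torch
import torch.nn.functional as F

# Illustrative CPU tensors; shapes are assumptions, not from the benchmark.
q = torch.randn(1, 12, 128, 64)
k = torch.randn(1, 12, 128, 64)
v = torch.randn(1, 12, 128, 64)

# This is the op the traceback points at; on a CPU-only run there is no GPU
# kernel for cudagraphs to capture around it, hence the partition notices.
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 12, 128, 64])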
[identical "cudagraph partition due to non gpu ops. Found from :" traces for the remaining BertForMaskedLM layers omitted; they alternate between the scaled_dot_product_attention and intermediate_act_fn call sites shown above]
Found from : 
2025-08-14T21:40:06.5617482Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:40:06.5617884Z     return mod(**inputs)
2025-08-14T21:40:06.5618380Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1328, in forward
2025-08-14T21:40:06.5618991Z     masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:40:06.5619303Z 
2025-08-14T21:40:13.6094422Z Compilation time (from dynamo_timed): 20.253622773
2025-08-14T21:40:13.6199190Z pass
2025-08-14T21:40:13.6206989Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:40:13.6208361Z TIMING: _recursive_pre_grad_passes:0.04927 _recursive_joint_graph_passes:0.54761 _recursive_post_grad_passes:0.1087 async_compile.wait:1.0287 code_gen:6.29957 inductor_compile:10.02327 backend_compile:16.2968 gc:0.00241 entire_frame_compile:20.25362 total_wall_time:20.25362
2025-08-14T21:40:13.6209959Z STATS: call_* op count: 289 | FakeTensorMode.__torch_dispatch__:24084 | FakeTensor.__torch_dispatch__:3845 | ProxyTorchDispatchMode.__torch_dispatch__:5315
2025-08-14T21:40:13.6210751Z Dynamo produced 1 graphs covering 289 ops with 0 graph breaks (0 unique)
2025-08-14T21:40:20.5104604Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:40:20.5106114Z   from pkg_resources import resource_filename
2025-08-14T21:40:21.3277649Z 
2025-08-14T21:40:23.1577163Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:40:23.1577523Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:40:23.1589606Z cpu eval BertForQuestionAnswering
2025-08-14T21:40:24.0420432Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:40:24.4773432Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:40:24.9008550Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:40:39.9697996Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9698403Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9698671Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9698930Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9699199Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9699455Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9699701Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9699944Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9700183Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9700423Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9702508Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9702873Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9703189Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9703489Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9703779Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9704030Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9704298Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9704541Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9704786Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9705447Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:40:39.9705935Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:40:39.9706356Z     return mod(**inputs)
2025-08-14T21:40:39.9706848Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward
2025-08-14T21:40:39.9707353Z     outputs = self.bert(
2025-08-14T21:40:39.9707813Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward
2025-08-14T21:40:39.9708401Z     encoder_outputs = self.encoder(
2025-08-14T21:40:39.9708944Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward
2025-08-14T21:40:39.9709433Z     layer_outputs = layer_module(
2025-08-14T21:40:39.9709870Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:40:39.9710334Z     return super().__call__(*args, **kwargs)
2025-08-14T21:40:39.9710826Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward
2025-08-14T21:40:39.9711321Z     self_attention_outputs = self.attention(
2025-08-14T21:40:39.9711801Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:40:39.9712273Z     return func(*args, **kwargs)
2025-08-14T21:40:39.9712790Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward
2025-08-14T21:40:39.9713272Z     self_outputs = self.self(
2025-08-14T21:40:39.9713727Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:40:39.9714316Z     return func(*args, **kwargs)
2025-08-14T21:40:39.9714795Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward
2025-08-14T21:40:39.9715349Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:40:39.9715601Z 
2025-08-14T21:40:39.9715722Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9715981Z cudagraph partition due to non gpu ops
2025-08-14T21:40:39.9716273Z cudagraph partition due to non gpu ops. 
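The "Compilation time (from dynamo_timed)" figure and the TIMING breakdown above appear to be printed by the benchmark harness's own instrumentation for the BertForMaskedLM run. A rough, hypothetical way to get a comparable first-call wall-clock number for any torch.compile'd function; the toy function f and the tensor sizes are assumptions, not the harness's actual code:

    # Sketch (assumption): timing the first call of a torch.compile'd function on CPU,
    # which includes Dynamo tracing plus Inductor code generation, vs. a warm call.
    import time
    import torch

    def f(x):
        return torch.nn.functional.gelu(x @ x.t())

    compiled = torch.compile(f, backend="inductor")
    x = torch.randn(256, 256)

    t0 = time.perf_counter()
    compiled(x)                                    # first call triggers compilation
    print(f"first-call wall time: {time.perf_counter() - t0:.3f}s")

    t0 = time.perf_counter()
    compiled(x)                                    # warm call reuses the compiled artifact
    print(f"steady-state wall time: {time.perf_counter() - t0:.3f}s")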
Found from : 2025-08-14T21:40:39.9716729Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9717138Z return mod(**inputs) 2025-08-14T21:40:39.9717602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9718073Z outputs = self.bert( 2025-08-14T21:40:39.9718532Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9719023Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9719507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9719979Z layer_outputs = layer_module( 2025-08-14T21:40:39.9720412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9720874Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9721442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:39.9721953Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:39.9722474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:39.9723117Z return forward_fn(*input_tensors) 2025-08-14T21:40:39.9723761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:39.9724352Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:39.9724895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:39.9725442Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:39.9725923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:39.9726363Z return self.act(input) 2025-08-14T21:40:39.9726510Z 2025-08-14T21:40:39.9726626Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9726883Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9727146Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9727406Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9727660Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9727898Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9728160Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9728409Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9728691Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9729148Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9729553Z return mod(**inputs) 2025-08-14T21:40:39.9729994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9730483Z outputs = self.bert( 2025-08-14T21:40:39.9730934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9731427Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9731990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9732474Z layer_outputs = layer_module( 2025-08-14T21:40:39.9732918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9733402Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9733883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:39.9734391Z self_attention_outputs = self.attention( 2025-08-14T21:40:39.9734877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9735344Z return func(*args, **kwargs) 2025-08-14T21:40:39.9735816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:39.9736308Z self_outputs = self.self( 2025-08-14T21:40:39.9736764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9743620Z return func(*args, **kwargs) 2025-08-14T21:40:39.9744104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:39.9744659Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:39.9744904Z 2025-08-14T21:40:39.9745097Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9745350Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9745635Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9746089Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9746496Z return mod(**inputs) 2025-08-14T21:40:39.9746946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9747426Z outputs = self.bert( 2025-08-14T21:40:39.9747968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9748453Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9749300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9749794Z layer_outputs = layer_module( 2025-08-14T21:40:39.9750229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9750684Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9751170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:39.9751737Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:39.9752297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:39.9752795Z return forward_fn(*input_tensors) 2025-08-14T21:40:39.9753320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:39.9753893Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:39.9754442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:39.9754975Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:39.9755460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:39.9755890Z return self.act(input) 2025-08-14T21:40:39.9756037Z 2025-08-14T21:40:39.9756135Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9756538Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9756788Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9757038Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9757286Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9757537Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9757776Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9758024Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9758308Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9758759Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9759168Z return mod(**inputs) 2025-08-14T21:40:39.9759623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9760085Z outputs = self.bert( 2025-08-14T21:40:39.9760538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9761025Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9761580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9762050Z layer_outputs = layer_module( 2025-08-14T21:40:39.9762486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9762950Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9763432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:39.9763918Z self_attention_outputs = self.attention( 2025-08-14T21:40:39.9764397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9764862Z return func(*args, **kwargs) 2025-08-14T21:40:39.9765313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:39.9765796Z self_outputs = self.self( 2025-08-14T21:40:39.9770537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9771048Z return func(*args, **kwargs) 2025-08-14T21:40:39.9771502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:39.9772046Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:39.9772285Z 2025-08-14T21:40:39.9772393Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9772641Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9772929Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9773373Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9773782Z return mod(**inputs) 2025-08-14T21:40:39.9774233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9774713Z outputs = self.bert( 2025-08-14T21:40:39.9775160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9775633Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9776102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9776587Z layer_outputs = layer_module( 2025-08-14T21:40:39.9777014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9777462Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9777942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:39.9778534Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:39.9779039Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:39.9779540Z return forward_fn(*input_tensors) 2025-08-14T21:40:39.9780048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:39.9780698Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:39.9781289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:39.9781825Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:39.9782306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:39.9782736Z return self.act(input) 2025-08-14T21:40:39.9782878Z 2025-08-14T21:40:39.9782975Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9783234Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9783482Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9783726Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9783979Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9784228Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9784465Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9784713Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9785004Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9785454Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9785853Z return mod(**inputs) 2025-08-14T21:40:39.9786302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9786775Z outputs = self.bert( 2025-08-14T21:40:39.9787211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9787690Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9788224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9788704Z layer_outputs = layer_module( 2025-08-14T21:40:39.9789129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9789576Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9790055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:39.9790540Z self_attention_outputs = self.attention( 2025-08-14T21:40:39.9791024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9791510Z return func(*args, **kwargs) 2025-08-14T21:40:39.9791968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:39.9792436Z self_outputs = self.self( 2025-08-14T21:40:39.9792892Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9793355Z return func(*args, **kwargs) 2025-08-14T21:40:39.9793812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:39.9794356Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:39.9794605Z 2025-08-14T21:40:39.9794706Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9794964Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9799518Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9800086Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9800496Z return mod(**inputs) 2025-08-14T21:40:39.9800951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9801495Z outputs = self.bert( 2025-08-14T21:40:39.9801943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9802431Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9802894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9803379Z layer_outputs = layer_module( 2025-08-14T21:40:39.9803861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9804317Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9804794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:39.9805297Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:39.9805811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:39.9806301Z return forward_fn(*input_tensors) 2025-08-14T21:40:39.9806815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:39.9807390Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:39.9807926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:39.9808442Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:39.9808926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:39.9809363Z return self.act(input) 2025-08-14T21:40:39.9809578Z 2025-08-14T21:40:39.9809692Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9810048Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9810314Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9810572Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9810814Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9811061Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9811305Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9811550Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9811834Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9812331Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9812730Z return mod(**inputs) 2025-08-14T21:40:39.9813173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9813645Z outputs = self.bert( 2025-08-14T21:40:39.9814089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9814560Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9815026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9815500Z layer_outputs = layer_module( 2025-08-14T21:40:39.9832362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9832981Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9833510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:39.9834026Z self_attention_outputs = self.attention( 2025-08-14T21:40:39.9834675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9835155Z return func(*args, **kwargs) 2025-08-14T21:40:39.9835643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:39.9836127Z self_outputs = self.self( 2025-08-14T21:40:39.9836578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9837046Z return func(*args, **kwargs) 2025-08-14T21:40:39.9837508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:39.9838062Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:39.9838304Z 2025-08-14T21:40:39.9838409Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9838844Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9839213Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9839666Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9840089Z return mod(**inputs) 2025-08-14T21:40:39.9840550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9841020Z outputs = self.bert( 2025-08-14T21:40:39.9841550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9842035Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9842507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9842976Z layer_outputs = layer_module( 2025-08-14T21:40:39.9843481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9843946Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9844501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:39.9844991Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:39.9845498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:39.9846002Z return forward_fn(*input_tensors) 2025-08-14T21:40:39.9846504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:39.9847082Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:39.9847631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:39.9848161Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:39.9848642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:39.9849530Z return self.act(input) 2025-08-14T21:40:39.9849676Z 2025-08-14T21:40:39.9849794Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9850043Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9850303Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9850552Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9850807Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9851051Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9851307Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9851556Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9851838Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9852297Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9852851Z return mod(**inputs) 2025-08-14T21:40:39.9861443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9862087Z outputs = self.bert( 2025-08-14T21:40:39.9862675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9863308Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9863921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9864492Z layer_outputs = layer_module( 2025-08-14T21:40:39.9864932Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9865394Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9865871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:39.9866373Z self_attention_outputs = self.attention( 2025-08-14T21:40:39.9866861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9867320Z return func(*args, **kwargs) 2025-08-14T21:40:39.9870081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:39.9870555Z self_outputs = self.self( 2025-08-14T21:40:39.9871015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9871470Z return func(*args, **kwargs) 2025-08-14T21:40:39.9871937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:39.9872531Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:39.9872767Z 2025-08-14T21:40:39.9872869Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9873126Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9873415Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9873962Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9874360Z return mod(**inputs) 2025-08-14T21:40:39.9874809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9875282Z outputs = self.bert( 2025-08-14T21:40:39.9875719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9876212Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9876683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9877162Z layer_outputs = layer_module( 2025-08-14T21:40:39.9877595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9878054Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9878539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:39.9879029Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:39.9879539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:39.9880036Z return forward_fn(*input_tensors) 2025-08-14T21:40:39.9880551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:39.9881222Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:39.9881762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:39.9882446Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:39.9882974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:39.9883400Z return self.act(input) 2025-08-14T21:40:39.9883548Z 2025-08-14T21:40:39.9883650Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9883907Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9884154Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9884410Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9884666Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9884910Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9885167Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9885419Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9885705Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9886153Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9886558Z return mod(**inputs) 2025-08-14T21:40:39.9887012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9887473Z outputs = self.bert( 2025-08-14T21:40:39.9887918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9888402Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9888875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9889336Z layer_outputs = layer_module( 2025-08-14T21:40:39.9889779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9890222Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9890704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:39.9891249Z self_attention_outputs = self.attention( 2025-08-14T21:40:39.9891724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9892177Z return func(*args, **kwargs) 2025-08-14T21:40:39.9892634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:39.9893103Z self_outputs = self.self( 2025-08-14T21:40:39.9893540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9894000Z return func(*args, **kwargs) 2025-08-14T21:40:39.9894452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:39.9895004Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:39.9895240Z 2025-08-14T21:40:39.9895339Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9895598Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9895884Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9896319Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9901041Z return mod(**inputs) 2025-08-14T21:40:39.9901542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9902019Z outputs = self.bert( 2025-08-14T21:40:39.9902449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9902928Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9903391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9903920Z layer_outputs = layer_module( 2025-08-14T21:40:39.9904360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9904822Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9905297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:39.9905782Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:39.9906285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:39.9906780Z return forward_fn(*input_tensors) 2025-08-14T21:40:39.9907299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:39.9907863Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:39.9908398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:39.9908919Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:39.9909393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:39.9909830Z return self.act(input) 2025-08-14T21:40:39.9909979Z 2025-08-14T21:40:39.9910076Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9910327Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9910575Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9910819Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9911146Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9911391Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9911701Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9911957Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9912230Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9912766Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9913173Z return mod(**inputs) 2025-08-14T21:40:39.9913623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9914088Z outputs = self.bert( 2025-08-14T21:40:39.9914529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9915024Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9915494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9915984Z layer_outputs = layer_module( 2025-08-14T21:40:39.9916424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9916879Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9917352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:39.9917852Z self_attention_outputs = self.attention( 2025-08-14T21:40:39.9918325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9918781Z return func(*args, **kwargs) 2025-08-14T21:40:39.9919243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:39.9919709Z self_outputs = self.self( 2025-08-14T21:40:39.9920155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9920613Z return func(*args, **kwargs) 2025-08-14T21:40:39.9921224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:39.9921776Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:39.9922014Z 2025-08-14T21:40:39.9922199Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9922455Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9922750Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9923200Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9923598Z return mod(**inputs) 2025-08-14T21:40:39.9924037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9924507Z outputs = self.bert( 2025-08-14T21:40:39.9924953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9925425Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9930190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9930667Z layer_outputs = layer_module( 2025-08-14T21:40:39.9931096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9931539Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9932023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:39.9932512Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:39.9933013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:39.9933498Z return forward_fn(*input_tensors) 2025-08-14T21:40:39.9933999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:39.9934568Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:39.9935167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:39.9935688Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:39.9936164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:39.9936584Z return self.act(input) 2025-08-14T21:40:39.9936721Z 2025-08-14T21:40:39.9936821Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9937076Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9937322Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9937564Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9937809Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9938056Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9938314Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9938551Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9938840Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9939288Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9939683Z return mod(**inputs) 2025-08-14T21:40:39.9940211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9940750Z outputs = self.bert( 2025-08-14T21:40:39.9941183Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9941667Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9942139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9942680Z layer_outputs = layer_module( 2025-08-14T21:40:39.9943104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9943571Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9944056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:39.9944547Z self_attention_outputs = self.attention( 2025-08-14T21:40:39.9945019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9945485Z return func(*args, **kwargs) 2025-08-14T21:40:39.9945945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:39.9946417Z self_outputs = self.self( 2025-08-14T21:40:39.9946866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9947342Z return func(*args, **kwargs) 2025-08-14T21:40:39.9947801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:39.9948339Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:39.9948580Z 2025-08-14T21:40:39.9949060Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9949342Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9949621Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9950073Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9950485Z return mod(**inputs) 2025-08-14T21:40:39.9950937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9951400Z outputs = self.bert( 2025-08-14T21:40:39.9951841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9952330Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9952928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9953400Z layer_outputs = layer_module( 2025-08-14T21:40:39.9953831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9954287Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9958920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:39.9959457Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:39.9959965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:39.9960459Z return forward_fn(*input_tensors) 2025-08-14T21:40:39.9960969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:39.9961623Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:39.9962154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:39.9962667Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:39.9963169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:39.9963610Z return self.act(input) 2025-08-14T21:40:39.9963749Z 2025-08-14T21:40:39.9963853Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9964097Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9964355Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9964611Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9964943Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9965185Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9965428Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9965666Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9965951Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9966402Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9966803Z return mod(**inputs) 2025-08-14T21:40:39.9967243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9967719Z outputs = self.bert( 2025-08-14T21:40:39.9968161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9968629Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9969177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9969715Z layer_outputs = layer_module( 2025-08-14T21:40:39.9970150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9970594Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9971075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:39.9971576Z self_attention_outputs = self.attention( 2025-08-14T21:40:39.9972092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9972557Z return func(*args, **kwargs) 2025-08-14T21:40:39.9973014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:39.9973481Z self_outputs = self.self( 2025-08-14T21:40:39.9973927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9974385Z return func(*args, **kwargs) 2025-08-14T21:40:39.9974904Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:39.9975453Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:39.9975686Z 2025-08-14T21:40:39.9975786Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9976042Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9976325Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9976759Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9977162Z return mod(**inputs) 2025-08-14T21:40:39.9977608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9978076Z outputs = self.bert( 2025-08-14T21:40:39.9978526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9979006Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9979469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9979933Z layer_outputs = layer_module( 2025-08-14T21:40:39.9980360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9980813Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9981290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:39.9981771Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:39.9982281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:39.9982848Z return forward_fn(*input_tensors) 2025-08-14T21:40:39.9983351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:39.9988181Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:39.9988720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:39.9989245Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:39.9989715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:39.9990144Z return self.act(input) 2025-08-14T21:40:39.9990287Z 2025-08-14T21:40:39.9990400Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9990661Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9990918Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9991164Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9991414Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9991667Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9991916Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9992159Z cudagraph partition due to non gpu ops 2025-08-14T21:40:39.9992430Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:39.9992878Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:39.9993288Z return mod(**inputs) 2025-08-14T21:40:39.9993725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:39.9994199Z outputs = self.bert( 2025-08-14T21:40:39.9994639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:39.9995126Z encoder_outputs = self.encoder( 2025-08-14T21:40:39.9995591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:39.9996137Z layer_outputs = layer_module( 2025-08-14T21:40:39.9996566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:39.9997018Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:39.9997491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:39.9997984Z self_attention_outputs = self.attention( 2025-08-14T21:40:39.9998592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:39.9999052Z return func(*args, **kwargs) 2025-08-14T21:40:39.9999513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:39.9999986Z self_outputs = self.self( 2025-08-14T21:40:40.0000446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:40.0000911Z return func(*args, **kwargs) 2025-08-14T21:40:40.0001442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:40.0001983Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:40.0002215Z 2025-08-14T21:40:40.0002313Z cudagraph partition due to non gpu ops 2025-08-14T21:40:40.0002563Z cudagraph partition due to non gpu ops 2025-08-14T21:40:40.0002903Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:40.0003350Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:40.0003743Z return mod(**inputs) 2025-08-14T21:40:40.0004250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:40.0004720Z outputs = self.bert( 2025-08-14T21:40:40.0005157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:40.0005633Z encoder_outputs = self.encoder( 2025-08-14T21:40:40.0006104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:40.0006581Z layer_outputs = layer_module( 2025-08-14T21:40:40.0007003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:40.0007461Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:40.0007943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:40.0008434Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:40.0008937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:40.0009435Z return forward_fn(*input_tensors) 2025-08-14T21:40:40.0009941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:40.0010503Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:40.0011036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:40.0011555Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:40.0012032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:40.0012451Z return self.act(input) 2025-08-14T21:40:40.0021010Z 2025-08-14T21:40:40.0021130Z cudagraph partition due to non gpu ops 2025-08-14T21:40:40.0021436Z cudagraph partition due to non gpu ops 2025-08-14T21:40:40.0021718Z cudagraph partition due to non gpu ops 2025-08-14T21:40:40.0022080Z cudagraph partition due to non gpu ops 2025-08-14T21:40:40.0022370Z cudagraph partition due to non gpu ops 2025-08-14T21:40:40.0022647Z cudagraph partition due to non gpu ops 2025-08-14T21:40:40.0022939Z cudagraph partition due to non gpu ops 2025-08-14T21:40:40.0023239Z cudagraph partition due to non gpu ops 2025-08-14T21:40:40.0023574Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:40.0024158Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:40.0024573Z return mod(**inputs) 2025-08-14T21:40:40.0025050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:40.0025520Z outputs = self.bert( 2025-08-14T21:40:40.0025970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:40.0026448Z encoder_outputs = self.encoder( 2025-08-14T21:40:40.0026935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:40.0029547Z layer_outputs = layer_module( 2025-08-14T21:40:40.0029975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:40.0030426Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:40.0030897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:40.0031392Z self_attention_outputs = self.attention( 2025-08-14T21:40:40.0031923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:40.0032460Z return func(*args, **kwargs) 2025-08-14T21:40:40.0032907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:40.0033382Z self_outputs = self.self( 2025-08-14T21:40:40.0033825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:40.0034291Z return func(*args, **kwargs) 2025-08-14T21:40:40.0034734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:40.0035275Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:40.0035511Z 2025-08-14T21:40:40.0035618Z cudagraph partition due to non gpu ops 2025-08-14T21:40:40.0035865Z cudagraph partition due to non gpu ops 2025-08-14T21:40:40.0036151Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:40.0036590Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:40.0036993Z return mod(**inputs) 2025-08-14T21:40:40.0037433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:40:40.0037906Z outputs = self.bert( 2025-08-14T21:40:40.0038343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:40.0038814Z encoder_outputs = self.encoder( 2025-08-14T21:40:40.0039283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:40.0039753Z layer_outputs = layer_module( 2025-08-14T21:40:40.0040183Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:40.0040626Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:40.0041221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:40.0041809Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:40.0042416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:40.0042915Z return forward_fn(*input_tensors) 2025-08-14T21:40:40.0043430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:40.0044002Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:40.0044531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:40.0045054Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:40.0045533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:40.0045969Z return self.act(input) 2025-08-14T21:40:40.0046111Z 2025-08-14T21:40:40.0046212Z cudagraph partition due to non gpu ops 2025-08-14T21:40:40.0046480Z cudagraph partition due to non gpu ops 2025-08-14T21:40:40.0046764Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:40:40.0047210Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:40.0047611Z return mod(**inputs) 2025-08-14T21:40:40.0048057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1799, in forward 2025-08-14T21:40:40.0048575Z start_loss = loss_fct(start_logits, start_positions) 2025-08-14T21:40:40.0049143Z 2025-08-14T21:40:40.0049281Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:40:40.0049735Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:40:40.0050299Z return mod(**inputs)
2025-08-14T21:40:40.0050737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1800, in forward
2025-08-14T21:40:40.0051262Z end_loss = loss_fct(end_logits, end_positions)
2025-08-14T21:40:40.0051459Z
2025-08-14T21:40:45.7487889Z Compilation time (from dynamo_timed): 19.044046188
2025-08-14T21:40:45.7488253Z pass
2025-08-14T21:40:45.7488634Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:40:45.7489624Z TIMING: _recursive_pre_grad_passes:0.05056 _recursive_joint_graph_passes:0.53979 _recursive_post_grad_passes:0.12078 async_compile.wait:0.00362 code_gen:5.15734 inductor_compile:8.92065 backend_compile:15.17963 gc:0.00021 entire_frame_compile:19.04405 total_wall_time:19.04405
2025-08-14T21:40:45.7499093Z STATS: call_* op count: 296 | FakeTensorMode.__torch_dispatch__:23997 | FakeTensor.__torch_dispatch__:3869 | ProxyTorchDispatchMode.__torch_dispatch__:5351
2025-08-14T21:40:45.7500008Z Dynamo produced 1 graphs covering 296 ops with 0 graph breaks (0 unique)
2025-08-14T21:40:52.6831516Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:40:52.6832681Z from pkg_resources import resource_filename
2025-08-14T21:40:53.4470016Z
2025-08-14T21:41:24.1215047Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:41:24.1215383Z loading model: 0it [00:30, ?it/s]
2025-08-14T21:41:24.1266015Z cpu eval BlenderbotForCausalLM
2025-08-14T21:41:24.4057368Z Compilation time (from dynamo_timed): 0
2025-08-14T21:41:24.4057716Z pass_due_to_skip
2025-08-14T21:41:24.4061413Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:41:24.4065762Z TIMING: total_wall_time:0
2025-08-14T21:41:24.4065986Z STATS: call_* op count: 0
2025-08-14T21:41:24.4066298Z Dynamo produced 0 graphs covering 0 ops with 0 graph breaks (0 unique)
2025-08-14T21:41:30.6647782Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:41:30.6649189Z from pkg_resources import resource_filename
2025-08-14T21:41:31.6337915Z
2025-08-14T21:41:32.9688851Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:41:32.9689197Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:41:32.9706853Z cpu eval BlenderbotSmallForCausalLM
2025-08-14T21:41:33.2588526Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:41:33.4174828Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:41:33.5653520Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:41:44.3051426Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3051815Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3052083Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3052335Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3052585Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3052836Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3053191Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3053457Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3053711Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3053963Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3054207Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3054460Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3055069Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3055311Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3055575Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3055873Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:41:44.3056353Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:41:44.3056770Z return mod(**inputs)
2025-08-14T21:41:44.3057335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward
2025-08-14T21:41:44.3057910Z outputs = self.model.decoder(
2025-08-14T21:41:44.3058469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
2025-08-14T21:41:44.3059168Z layer_outputs = decoder_layer(
2025-08-14T21:41:44.3059612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:41:44.3060072Z return super().__call__(*args, **kwargs)
2025-08-14T21:41:44.3060629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward
2025-08-14T21:41:44.3061215Z hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:41:44.3061793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
2025-08-14T21:41:44.3062382Z attn_output, attn_weights = attention_interface(
2025-08-14T21:41:44.3062970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:41:44.3063595Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:41:44.3063840Z
2025-08-14T21:41:44.3063979Z cudagraph partition due to non gpu ops.
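Editor's note: the repeated "cudagraph partition due to non gpu ops" lines appear to come from Inductor's CUDA-graph partitioning, which can only capture GPU work and therefore splits out regions containing CPU or other non-GPU ops; on this CPU-only configuration every flagged region is such a case. As a minimal sketch of the setting under which that partitioning logic runs, using a toy module rather than the benchmark harness (whether this job enables CUDA graphs exactly this way is an assumption):

import torch

# Toy stand-in for a benchmark model; not the harness code.
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU())
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# "reduce-overhead" asks Inductor to replay eligible GPU regions as CUDA graphs;
# regions with non-GPU ops get partitioned out, which is what the log lines report.
compiled = torch.compile(model, mode="reduce-overhead")

x = torch.randn(8, 64, device=device)
for _ in range(3):  # a few warm-up calls so compilation (and any capture) happens
    out = compiled(x)
print(out.shape)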
Found from : 2025-08-14T21:41:44.3064435Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3064837Z return mod(**inputs) 2025-08-14T21:41:44.3065478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3066041Z outputs = self.model.decoder( 2025-08-14T21:41:44.3066586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3067128Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3067571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3068033Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3068588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:41:44.3069185Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:41:44.3069772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:41:44.3070353Z attn_output, attn_weights = attention_interface( 2025-08-14T21:41:44.3070903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:41:44.3071479Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:41:44.3071685Z 2025-08-14T21:41:44.3071797Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3072100Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3072391Z cudagraph partition due to non gpu ops. 
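Editor's note: this variant of the stack ends one line later, in sdpa_attention_forward's attn_output.transpose(1, 2).contiguous(). A small illustration with invented shapes of why that contiguous() call exists: transposing back to (batch, seq, heads, head_dim) yields a non-contiguous view that has to be materialized before it can be reshaped for the output projection.

import torch

# Invented shapes: SDPA returns (batch, heads, seq, head_dim).
attn_output = torch.randn(2, 12, 128, 64)

restored = attn_output.transpose(1, 2)   # (batch, seq, heads, head_dim), a view
print(restored.is_contiguous())          # False
merged = restored.contiguous().view(2, 128, 12 * 64)
print(merged.shape)                      # torch.Size([2, 128, 768])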
Found from : 2025-08-14T21:41:44.3072832Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3073369Z return mod(**inputs) 2025-08-14T21:41:44.3078114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3078698Z outputs = self.model.decoder( 2025-08-14T21:41:44.3079239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3079802Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3080245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3080702Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3081304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward 2025-08-14T21:41:44.3081939Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:41:44.3082437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:44.3082854Z return self.act(input) 2025-08-14T21:41:44.3083001Z 2025-08-14T21:41:44.3083104Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3083358Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3083603Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3083865Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3084122Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3084368Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3084611Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3084847Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3085131Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:44.3085582Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3085976Z return mod(**inputs) 2025-08-14T21:41:44.3086499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3087126Z outputs = self.model.decoder( 2025-08-14T21:41:44.3087686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3088340Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3088772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3089222Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3089765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:41:44.3090343Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:41:44.3090917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:41:44.3091496Z attn_output, attn_weights = attention_interface( 2025-08-14T21:41:44.3092104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:41:44.3092706Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:44.3092948Z 2025-08-14T21:41:44.3093078Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:41:44.3093520Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3093924Z return mod(**inputs) 2025-08-14T21:41:44.3094439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3094989Z outputs = self.model.decoder( 2025-08-14T21:41:44.3095578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3096124Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3096565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3097022Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3097567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:41:44.3098146Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:41:44.3098722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:41:44.3099307Z attn_output, attn_weights = attention_interface( 2025-08-14T21:41:44.3099859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:41:44.3100433Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:41:44.3100636Z 2025-08-14T21:41:44.3100746Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3100996Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3101292Z 
cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:41:44.3101735Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3102134Z return mod(**inputs) 2025-08-14T21:41:44.3106929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3107486Z outputs = self.model.decoder( 2025-08-14T21:41:44.3108027Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3108579Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3109008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3109521Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3110077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward 2025-08-14T21:41:44.3110676Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:41:44.3111168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:44.3111590Z return self.act(input) 2025-08-14T21:41:44.3111727Z 2025-08-14T21:41:44.3111833Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3112079Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3112326Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3112585Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3112822Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3113065Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3113311Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3113546Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3113826Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:44.3114269Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3114670Z return mod(**inputs) 2025-08-14T21:41:44.3115177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3115730Z outputs = self.model.decoder( 2025-08-14T21:41:44.3116266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3116919Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3117421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3117873Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3118425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:41:44.3118994Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:41:44.3119571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:41:44.3120145Z attn_output, attn_weights = attention_interface( 2025-08-14T21:41:44.3120700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:41:44.3121421Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:44.3121669Z 2025-08-14T21:41:44.3121800Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:41:44.3122255Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3122665Z return mod(**inputs) 2025-08-14T21:41:44.3123181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3123740Z outputs = self.model.decoder( 2025-08-14T21:41:44.3124282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3124821Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3125303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3125756Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3126310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:41:44.3126931Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:41:44.3127508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:41:44.3128081Z attn_output, attn_weights = attention_interface( 2025-08-14T21:41:44.3128640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:41:44.3129204Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:41:44.3129413Z 2025-08-14T21:41:44.3129510Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3129763Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3130041Z 
cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:41:44.3130503Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3130909Z return mod(**inputs) 2025-08-14T21:41:44.3131490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3136278Z outputs = self.model.decoder( 2025-08-14T21:41:44.3136826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3137373Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3137803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3138264Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3138824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward 2025-08-14T21:41:44.3139489Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:41:44.3139971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:44.3140399Z return self.act(input) 2025-08-14T21:41:44.3140536Z 2025-08-14T21:41:44.3140640Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3140891Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3141128Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3141374Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3141613Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3141846Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3142083Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3142321Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3142590Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:44.3143038Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3143441Z return mod(**inputs) 2025-08-14T21:41:44.3143955Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3144505Z outputs = self.model.decoder( 2025-08-14T21:41:44.3145044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3145587Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3146061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3146581Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3147132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:41:44.3147721Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:41:44.3148293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:41:44.3149235Z attn_output, attn_weights = attention_interface( 2025-08-14T21:41:44.3149805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:41:44.3150412Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:44.3150648Z 2025-08-14T21:41:44.3150780Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:41:44.3151228Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3151633Z return mod(**inputs) 2025-08-14T21:41:44.3152141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3152700Z outputs = self.model.decoder( 2025-08-14T21:41:44.3153249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3153807Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3154234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3154687Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3155244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:41:44.3155824Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:41:44.3156390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:41:44.3156971Z attn_output, attn_weights = attention_interface( 2025-08-14T21:41:44.3157662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:41:44.3158229Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:41:44.3158444Z 2025-08-14T21:41:44.3158546Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3158814Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3159116Z 
cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:41:44.3159557Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3159973Z return mod(**inputs) 2025-08-14T21:41:44.3160553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3169560Z outputs = self.model.decoder( 2025-08-14T21:41:44.3170289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3171047Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3171616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3172062Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3172621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward 2025-08-14T21:41:44.3173235Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:41:44.3173731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:44.3174151Z return self.act(input) 2025-08-14T21:41:44.3174296Z 2025-08-14T21:41:44.3174392Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3174645Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3189096Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3189472Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3189879Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3190309Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3190561Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3190818Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3191110Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:44.3191573Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3191997Z return mod(**inputs) 2025-08-14T21:41:44.3192565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3193139Z outputs = self.model.decoder( 2025-08-14T21:41:44.3193756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3194334Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3194782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3195230Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3195797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:41:44.3196390Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:41:44.3196973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:41:44.3197548Z attn_output, attn_weights = attention_interface( 2025-08-14T21:41:44.3198107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:41:44.3198779Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:44.3199018Z 2025-08-14T21:41:44.3199162Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:41:44.3199678Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3200097Z return mod(**inputs) 2025-08-14T21:41:44.3200617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3201179Z outputs = self.model.decoder( 2025-08-14T21:41:44.3201817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3202368Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3202819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3203291Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3203919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:41:44.3208761Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:41:44.3209344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:41:44.3209920Z attn_output, attn_weights = attention_interface( 2025-08-14T21:41:44.3210479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:41:44.3211054Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:41:44.3211271Z 2025-08-14T21:41:44.3211372Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3211632Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3211920Z 
cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:41:44.3212374Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3212840Z return mod(**inputs) 2025-08-14T21:41:44.3213367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3213913Z outputs = self.model.decoder( 2025-08-14T21:41:44.3214455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3215005Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3215443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3215887Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3216442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward 2025-08-14T21:41:44.3217063Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:41:44.3217554Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:44.3217988Z return self.act(input) 2025-08-14T21:41:44.3218135Z 2025-08-14T21:41:44.3218279Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3218540Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3218859Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3219111Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3219363Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3219602Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3219851Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3220099Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3220381Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:44.3220888Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3221301Z return mod(**inputs) 2025-08-14T21:41:44.3221826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3222425Z outputs = self.model.decoder( 2025-08-14T21:41:44.3222971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3223526Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3223959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3224420Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3224979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:41:44.3225573Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:41:44.3226152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:41:44.3226736Z attn_output, attn_weights = attention_interface( 2025-08-14T21:41:44.3227297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:41:44.3227904Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:44.3228138Z 2025-08-14T21:41:44.3228271Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:41:44.3228723Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3229129Z return mod(**inputs) 2025-08-14T21:41:44.3229645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3230204Z outputs = self.model.decoder( 2025-08-14T21:41:44.3230798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3231352Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3231781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3232234Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3232851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:41:44.3237673Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:41:44.3238246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:41:44.3238836Z attn_output, attn_weights = attention_interface( 2025-08-14T21:41:44.3239401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:41:44.3239975Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:41:44.3240182Z 2025-08-14T21:41:44.3240284Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3240546Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3240838Z 
cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:41:44.3241364Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3241772Z return mod(**inputs) 2025-08-14T21:41:44.3242305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3242927Z outputs = self.model.decoder( 2025-08-14T21:41:44.3243466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3244019Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3244456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3244901Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3245461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward 2025-08-14T21:41:44.3246071Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:41:44.3246567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:44.3246992Z return self.act(input) 2025-08-14T21:41:44.3247138Z 2025-08-14T21:41:44.3247285Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3247553Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3247866Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3248129Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3248373Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3248621Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3249183Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3249429Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3249721Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:44.3250181Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3250579Z return mod(**inputs) 2025-08-14T21:41:44.3251108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3251833Z outputs = self.model.decoder( 2025-08-14T21:41:44.3252390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3253047Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3253491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3253948Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3254505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:41:44.3255079Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:41:44.3255657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:41:44.3256240Z attn_output, attn_weights = attention_interface( 2025-08-14T21:41:44.3256792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:41:44.3257398Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:44.3257646Z 2025-08-14T21:41:44.3257777Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:41:44.3258222Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3258616Z return mod(**inputs) 2025-08-14T21:41:44.3259142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3259691Z outputs = self.model.decoder( 2025-08-14T21:41:44.3260229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3260770Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3261284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3261790Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3266530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:41:44.3267111Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:41:44.3267693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:41:44.3268270Z attn_output, attn_weights = attention_interface( 2025-08-14T21:41:44.3268821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:41:44.3269396Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:41:44.3269608Z 2025-08-14T21:41:44.3269716Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3269973Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3270254Z 
cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:41:44.3270701Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3271106Z return mod(**inputs) 2025-08-14T21:41:44.3271619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3272173Z outputs = self.model.decoder( 2025-08-14T21:41:44.3272715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3273267Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3273698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3274151Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3274709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward 2025-08-14T21:41:44.3275368Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:41:44.3275855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:44.3276336Z return self.act(input) 2025-08-14T21:41:44.3276477Z 2025-08-14T21:41:44.3276655Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3276903Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3277152Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3277397Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3277633Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3277882Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3278122Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3278369Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3278639Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:44.3279086Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3279487Z return mod(**inputs) 2025-08-14T21:41:44.3279997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3280589Z outputs = self.model.decoder( 2025-08-14T21:41:44.3281128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3281795Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3282223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3282693Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3283304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:41:44.3283890Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:41:44.3284527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:41:44.3285109Z attn_output, attn_weights = attention_interface( 2025-08-14T21:41:44.3285659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:41:44.3286269Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:44.3286516Z 2025-08-14T21:41:44.3286650Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:41:44.3287093Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:44.3287494Z return mod(**inputs) 2025-08-14T21:41:44.3288014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:41:44.3288567Z outputs = self.model.decoder( 2025-08-14T21:41:44.3289098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:41:44.3289642Z layer_outputs = decoder_layer( 2025-08-14T21:41:44.3290079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:44.3290527Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:44.3295344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:41:44.3295926Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:41:44.3296502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:41:44.3297134Z attn_output, attn_weights = attention_interface( 2025-08-14T21:41:44.3297686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:41:44.3298263Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:41:44.3298468Z 2025-08-14T21:41:44.3298578Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3298832Z cudagraph partition due to non gpu ops 2025-08-14T21:41:44.3299106Z 
cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:41:44.3299550Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:41:44.3299952Z     return mod(**inputs)
2025-08-14T21:41:44.3300463Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward
2025-08-14T21:41:44.3301018Z     outputs = self.model.decoder(
2025-08-14T21:41:44.3301564Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
2025-08-14T21:41:44.3302109Z     layer_outputs = decoder_layer(
2025-08-14T21:41:44.3302540Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:41:44.3302994Z     return super().__call__(*args, **kwargs)
2025-08-14T21:41:44.3303545Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward
2025-08-14T21:41:44.3304150Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:41:44.3304636Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:41:44.3305111Z     return self.act(input)
2025-08-14T21:41:44.3305301Z 
2025-08-14T21:41:44.3305410Z cudagraph partition due to non gpu ops
2025-08-14T21:41:44.3305773Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:41:44.3306226Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:41:44.3306634Z     return mod(**inputs)
2025-08-14T21:41:44.3307148Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1528, in forward
2025-08-14T21:41:44.3307705Z     logits = self.lm_head(outputs[0])
2025-08-14T21:41:44.3307877Z 
2025-08-14T21:41:44.3308006Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:41:44.3308447Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:41:44.3308840Z     return mod(**inputs)
2025-08-14T21:41:44.3309355Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1534, in forward
2025-08-14T21:41:44.3310009Z     loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:41:44.3310263Z 
2025-08-14T21:41:49.8247706Z Compilation time (from dynamo_timed): 14.793886459
2025-08-14T21:41:49.8281964Z pass
2025-08-14T21:41:49.8282401Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:41:49.8283573Z TIMING: _recursive_pre_grad_passes:0.03851 _recursive_joint_graph_passes:0.42033 _recursive_post_grad_passes:0.08637 async_compile.wait:0.93948 code_gen:5.12418 inductor_compile:8.37468 backend_compile:12.63246 gc:0.00065 entire_frame_compile:14.79389 total_wall_time:14.79389
2025-08-14T21:41:49.8284729Z STATS: call_* op count: 252 | FakeTensorMode.__torch_dispatch__:16977 | FakeTensor.__torch_dispatch__:2714 | ProxyTorchDispatchMode.__torch_dispatch__:3847
2025-08-14T21:41:49.8285362Z Dynamo produced 1 graphs covering 252 ops with 0 graph breaks (0 unique)
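The TIMING line above is a flat list of phase:seconds pairs. A throwaway helper (not part of the benchmark harness; the format is assumed from the line as printed) can turn it into a dict, for example to confirm that inductor_compile accounts for most of entire_frame_compile on this shard:

def parse_timing(line: str) -> dict:
    # Split "TIMING: a:1.0 b:2.0" into {"a": 1.0, "b": 2.0}; rpartition keeps
    # dots inside phase names such as async_compile.wait intact.
    _, _, payload = line.partition("TIMING:")
    out = {}
    for field in payload.split():
        phase, _, seconds = field.rpartition(":")
        out[phase] = float(seconds)
    return out

timing = parse_timing(
    "TIMING: _recursive_pre_grad_passes:0.03851 _recursive_joint_graph_passes:0.42033 "
    "_recursive_post_grad_passes:0.08637 async_compile.wait:0.93948 code_gen:5.12418 "
    "inductor_compile:8.37468 backend_compile:12.63246 gc:0.00065 "
    "entire_frame_compile:14.79389 total_wall_time:14.79389"
)
print(timing["inductor_compile"] / timing["total_wall_time"])  # roughly 0.57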
2025-08-14T21:41:56.0043650Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:41:56.0044852Z   from pkg_resources import resource_filename
2025-08-14T21:41:56.7374149Z 
2025-08-14T21:41:58.4759961Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:41:58.4760316Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:41:58.4790539Z cpu eval BlenderbotSmallForConditionalGeneration
2025-08-14T21:41:59.0254080Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:41:59.2941708Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:41:59.5724117Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:42:21.2355824Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2356204Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2356529Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2356811Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2357068Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2357310Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2357556Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2357807Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2358081Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2358339Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2358586Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2358835Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2359077Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2359383Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2360018Z cudagraph partition due to non gpu ops
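The repeated WARNING:common lines in this shard ("Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]") come from a GPU cache-clearing helper being invoked on a CPU run. A hedged sketch of the kind of device guard that would make the call a silent no-op on CPU, as an assumption about intent rather than the harness's actual code:

import torch

def empty_device_cache(device: str) -> None:
    # Only CUDA and XPU expose an allocator cache to empty; on CPU there is
    # nothing to release, so simply skip instead of warning.
    if device == "cuda" and torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif device == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()

empty_device_cache("cpu")  # no-op on a CPU-only job like this one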
2025-08-14T21:42:21.2360309Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:42:21.2360799Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:21.2361276Z     return mod(**inputs)
2025-08-14T21:42:21.2361857Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
2025-08-14T21:42:21.2362423Z     outputs = self.model(
2025-08-14T21:42:21.2362957Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1195, in forward
2025-08-14T21:42:21.2363571Z     encoder_outputs = self.encoder(
2025-08-14T21:42:21.2364122Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 812, in forward
2025-08-14T21:42:21.2364688Z     layer_outputs = encoder_layer(
2025-08-14T21:42:21.2365131Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:42:21.2365591Z     return super().__call__(*args, **kwargs)
2025-08-14T21:42:21.2366146Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 296, in forward
2025-08-14T21:42:21.2366723Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:42:21.2367292Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
2025-08-14T21:42:21.2367881Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:42:21.2368430Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:42:21.2369034Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:42:21.2369283Z 
2025-08-14T21:42:21.2369423Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:42:21.2369965Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:21.2370382Z     return mod(**inputs)
2025-08-14T21:42:21.2370910Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
2025-08-14T21:42:21.2371459Z     outputs = self.model(
2025-08-14T21:42:21.2372012Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1195, in forward
2025-08-14T21:42:21.2372565Z     encoder_outputs = self.encoder(
2025-08-14T21:42:21.2373113Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 812, in forward
2025-08-14T21:42:21.2373685Z     layer_outputs = encoder_layer(
2025-08-14T21:42:21.2382551Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:42:21.2383160Z     return super().__call__(*args, **kwargs)
2025-08-14T21:42:21.2383848Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 296, in forward
2025-08-14T21:42:21.2384424Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:42:21.2384996Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
2025-08-14T21:42:21.2385580Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:42:21.2386147Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:42:21.2386725Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:42:21.2387000Z 
2025-08-14T21:42:21.2387104Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2387368Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2387654Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:42:21.2388109Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:21.2390720Z     return mod(**inputs)
2025-08-14T21:42:21.2391250Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
2025-08-14T21:42:21.2391797Z     outputs = self.model(
2025-08-14T21:42:21.2392331Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1195, in forward
2025-08-14T21:42:21.2392901Z     encoder_outputs = self.encoder(
2025-08-14T21:42:21.2393443Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 812, in forward
2025-08-14T21:42:21.2393994Z     layer_outputs = encoder_layer(
2025-08-14T21:42:21.2394419Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:42:21.2394878Z     return super().__call__(*args, **kwargs)
2025-08-14T21:42:21.2395431Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 307, in forward
2025-08-14T21:42:21.2396040Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:42:21.2396586Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:42:21.2397112Z     return self.act(input)
2025-08-14T21:42:21.2397251Z 
2025-08-14T21:42:21.2397361Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2397610Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2397864Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2398113Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2398351Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2398592Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2398902Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2399152Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2399426Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:42:21.2650784Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:21.2651203Z return mod(**inputs) 2025-08-14T21:42:21.2651850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:21.2652396Z outputs = self.model( 2025-08-14T21:42:21.2652924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:21.2653536Z decoder_outputs = self.decoder( 2025-08-14T21:42:21.2654088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:21.2654633Z layer_outputs = decoder_layer( 2025-08-14T21:42:21.2655075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:21.2655542Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:21.2656095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:42:21.2656684Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:42:21.2657270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:21.2657855Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:21.2658413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:42:21.2659020Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:21.2659266Z 2025-08-14T21:42:21.2659403Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:42:21.2659851Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:21.2660334Z     return mod(**inputs)
2025-08-14T21:42:21.2660862Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
2025-08-14T21:42:21.2661412Z     outputs = self.model(
2025-08-14T21:42:21.2661941Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
2025-08-14T21:42:21.2662488Z     decoder_outputs = self.decoder(
2025-08-14T21:42:21.2663042Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
2025-08-14T21:42:21.2663607Z     layer_outputs = decoder_layer(
2025-08-14T21:42:21.2672339Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:42:21.2672953Z     return super().__call__(*args, **kwargs)
2025-08-14T21:42:21.2673714Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward
2025-08-14T21:42:21.2674520Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:42:21.2675239Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
2025-08-14T21:42:21.2675825Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:42:21.2676391Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:42:21.2676968Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:42:21.2677174Z 
2025-08-14T21:42:21.2677278Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2677546Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2677808Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2678050Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2678355Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2678722Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2678964Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2679210Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2679496Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:42:21.2679947Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:21.2680348Z     return mod(**inputs)
2025-08-14T21:42:21.2680881Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
2025-08-14T21:42:21.2681519Z     outputs = self.model(
2025-08-14T21:42:21.2682038Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
2025-08-14T21:42:21.2682638Z     decoder_outputs = self.decoder(
2025-08-14T21:42:21.2683194Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
2025-08-14T21:42:21.2683742Z     layer_outputs = decoder_layer(
2025-08-14T21:42:21.2684174Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:42:21.2684631Z     return super().__call__(*args, **kwargs)
2025-08-14T21:42:21.2685206Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward
2025-08-14T21:42:21.2685811Z     hidden_states, cross_attn_weights = self.encoder_attn(
2025-08-14T21:42:21.2686415Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
2025-08-14T21:42:21.2687076Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:42:21.2687641Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:42:21.2688250Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:42:21.2688483Z 
2025-08-14T21:42:21.2688628Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:42:21.2689071Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:21.2689482Z     return mod(**inputs)
2025-08-14T21:42:21.2690003Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
2025-08-14T21:42:21.2690543Z     outputs = self.model(
2025-08-14T21:42:21.2691065Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
2025-08-14T21:42:21.2691622Z     decoder_outputs = self.decoder(
2025-08-14T21:42:21.2692165Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
2025-08-14T21:42:21.2692763Z     layer_outputs = decoder_layer(
2025-08-14T21:42:21.2693270Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:42:21.2693720Z     return super().__call__(*args, **kwargs)
2025-08-14T21:42:21.2694275Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward
2025-08-14T21:42:21.2694862Z     hidden_states, cross_attn_weights = self.encoder_attn(
2025-08-14T21:42:21.2695449Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
2025-08-14T21:42:21.2696022Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:42:21.2696574Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:42:21.2697201Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:42:21.2697415Z 
2025-08-14T21:42:21.2697515Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2697769Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2698049Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:42:21.2698490Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:21.2698893Z     return mod(**inputs)
2025-08-14T21:42:21.2699406Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
2025-08-14T21:42:21.2699956Z     outputs = self.model(
2025-08-14T21:42:21.2700479Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
2025-08-14T21:42:21.2701039Z     decoder_outputs = self.decoder(
2025-08-14T21:42:21.2701583Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
2025-08-14T21:42:21.2702140Z     layer_outputs = decoder_layer(
2025-08-14T21:42:21.2702576Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:42:21.2703033Z     return super().__call__(*args, **kwargs)
2025-08-14T21:42:21.2703578Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward
2025-08-14T21:42:21.2704185Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:42:21.2704678Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:42:21.2705157Z     return self.act(input)
2025-08-14T21:42:21.2705305Z 
2025-08-14T21:42:21.2705403Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2705658Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2705912Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2706155Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2706398Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2706651Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2706887Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2707142Z cudagraph partition due to non gpu ops
2025-08-14T21:42:21.2707487Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:42:21.2977137Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:21.2977219Z return mod(**inputs) 2025-08-14T21:42:21.2977614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:21.2977698Z outputs = self.model( 2025-08-14T21:42:21.2978086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:21.2978187Z decoder_outputs = self.decoder( 2025-08-14T21:42:21.2978575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:21.2978674Z layer_outputs = decoder_layer( 2025-08-14T21:42:21.2978960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:21.2979060Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:21.2979455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:42:21.2979580Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:42:21.2979964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:21.2980091Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:21.2980458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:42:21.2980675Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:21.2980688Z 2025-08-14T21:42:21.2980898Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:21.2981152Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:21.2981235Z return mod(**inputs) 2025-08-14T21:42:21.2981634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:21.2981721Z outputs = self.model( 2025-08-14T21:42:21.2982116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:21.2982207Z decoder_outputs = self.decoder( 2025-08-14T21:42:21.2982599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:21.2982739Z layer_outputs = decoder_layer( 2025-08-14T21:42:21.2983032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:21.2991537Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:21.2992079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:42:21.2992221Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:42:21.2992766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:21.2992903Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:21.2993405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:42:21.2993575Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:42:21.2993596Z 2025-08-14T21:42:21.2993703Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.2993817Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.2993984Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.2994083Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.2994192Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.2994285Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.2994377Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.2994479Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.2994608Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:21.2994859Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:21.2994949Z return mod(**inputs) 2025-08-14T21:42:21.2995338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:21.2995432Z outputs = self.model( 2025-08-14T21:42:21.2995823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:21.2995918Z decoder_outputs = self.decoder( 2025-08-14T21:42:21.2996311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:21.2996399Z layer_outputs = decoder_layer( 2025-08-14T21:42:21.2996691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:21.2996789Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:21.2997199Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-08-14T21:42:21.2997355Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:42:21.2999914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:21.3000039Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:21.3000412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:42:21.3000575Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:21.3000589Z 2025-08-14T21:42:21.3000722Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:21.3000975Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:21.3001057Z return mod(**inputs) 2025-08-14T21:42:21.3001541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:21.3001631Z outputs = self.model( 2025-08-14T21:42:21.3002024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:21.3002121Z decoder_outputs = self.decoder( 2025-08-14T21:42:21.3002507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:21.3002612Z layer_outputs = decoder_layer( 2025-08-14T21:42:21.3002896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:21.3002993Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:21.3003389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-08-14T21:42:21.3003521Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:42:21.3003919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:21.3004089Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:21.3004457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:42:21.3004605Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:42:21.3004618Z 2025-08-14T21:42:21.3004715Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3004821Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3004951Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:21.3005202Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:21.3005290Z return mod(**inputs) 2025-08-14T21:42:21.3005732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:21.3005822Z outputs = self.model( 2025-08-14T21:42:21.3006222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:21.3006311Z decoder_outputs = self.decoder( 2025-08-14T21:42:21.3006707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:21.3006797Z layer_outputs = decoder_layer( 2025-08-14T21:42:21.3007084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:21.3007191Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:21.3007584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward 2025-08-14T21:42:21.3007791Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:42:21.3008063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:42:21.3008152Z return self.act(input) 2025-08-14T21:42:21.3008165Z 2025-08-14T21:42:21.3008274Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3008368Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3008462Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3008562Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3008654Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3008754Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3008846Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3008938Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3009073Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:21.3009322Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:21.3009407Z return mod(**inputs) 2025-08-14T21:42:21.3009813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:21.3009897Z outputs = self.model( 2025-08-14T21:42:21.3010282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:21.3010379Z decoder_outputs = self.decoder( 2025-08-14T21:42:21.3010764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:21.3010858Z layer_outputs = decoder_layer( 2025-08-14T21:42:21.3011139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:21.3011238Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:21.3011641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:42:21.3011852Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:42:21.3012326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:21.3012447Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:21.3012820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:42:21.3012990Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:21.3013003Z 2025-08-14T21:42:21.3013131Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:21.3013389Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:21.3013472Z return mod(**inputs) 2025-08-14T21:42:21.3022175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:21.3022298Z outputs = self.model( 2025-08-14T21:42:21.3022719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:21.3022816Z decoder_outputs = self.decoder( 2025-08-14T21:42:21.3023209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:21.3023319Z layer_outputs = decoder_layer( 2025-08-14T21:42:21.3023606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:21.3023709Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:21.3024222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:42:21.3024354Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:42:21.3024749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:21.3024870Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:21.3025243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:42:21.3025390Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:42:21.3025405Z 2025-08-14T21:42:21.3025505Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3025614Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3025711Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3025812Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3025915Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3026008Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3026106Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3026255Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3026431Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:21.3031030Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:21.3031127Z return mod(**inputs) 2025-08-14T21:42:21.3031519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:21.3031620Z outputs = self.model( 2025-08-14T21:42:21.3032006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:21.3032098Z decoder_outputs = self.decoder( 2025-08-14T21:42:21.3032500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:21.3032653Z layer_outputs = decoder_layer( 2025-08-14T21:42:21.3032946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:21.3033048Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:21.3033433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-08-14T21:42:21.3033577Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:42:21.3033966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:21.3034087Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:21.3034467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:42:21.3034635Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:21.3034649Z 2025-08-14T21:42:21.3034792Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:21.3035046Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:21.3035133Z return mod(**inputs) 2025-08-14T21:42:21.3035535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:21.3035623Z outputs = self.model( 2025-08-14T21:42:21.3036027Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:21.3036121Z decoder_outputs = self.decoder( 2025-08-14T21:42:21.3036566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:21.3036670Z layer_outputs = decoder_layer( 2025-08-14T21:42:21.3036956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:21.3037055Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:21.3037451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-08-14T21:42:21.3037584Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:42:21.3037976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:21.3038096Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:21.3038462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:42:21.3038609Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:42:21.3038626Z 2025-08-14T21:42:21.3038725Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3038837Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3038966Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:21.3039219Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:21.3039312Z return mod(**inputs) 2025-08-14T21:42:21.3039700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:21.3039786Z outputs = self.model( 2025-08-14T21:42:21.3040183Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:21.3040279Z decoder_outputs = self.decoder( 2025-08-14T21:42:21.3040765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:21.3040860Z layer_outputs = decoder_layer( 2025-08-14T21:42:21.3041291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:21.3041408Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:21.3041794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward 2025-08-14T21:42:21.3041955Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:42:21.3042230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:42:21.3042318Z return self.act(input) 2025-08-14T21:42:21.3042335Z 2025-08-14T21:42:21.3042443Z cudagraph partition due to non gpu ops 2025-08-14T21:42:21.3042574Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:42:21.3042831Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:21.3042923Z return mod(**inputs) 2025-08-14T21:42:21.3043310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1393, in forward 2025-08-14T21:42:21.3043470Z lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias 2025-08-14T21:42:21.3043483Z 2025-08-14T21:42:21.3043612Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:42:21.3043863Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:21.3043958Z return mod(**inputs)
2025-08-14T21:42:21.3044349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1398, in forward
2025-08-14T21:42:21.3044619Z masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:42:21.3044633Z
2025-08-14T21:42:29.8036667Z Compilation time (from dynamo_timed): 28.680100046
2025-08-14T21:42:29.8058427Z pass
2025-08-14T21:42:29.8058874Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:42:29.8060053Z TIMING: _recursive_pre_grad_passes:0.08551 _recursive_joint_graph_passes:0.82785 _recursive_post_grad_passes:0.16774 async_compile.wait:0.90235 code_gen:7.67244 inductor_compile:12.61138 backend_compile:23.59262 gc:0.00019 entire_frame_compile:28.6801 total_wall_time:28.6801
2025-08-14T21:42:29.8061199Z STATS: call_* op count: 652 | FakeTensorMode.__torch_dispatch__:42623 | FakeTensor.__torch_dispatch__:6580 | ProxyTorchDispatchMode.__torch_dispatch__:9376
2025-08-14T21:42:29.8061828Z Dynamo produced 1 graphs covering 652 ops with 0 graph breaks (0 unique)
2025-08-14T21:42:36.3514681Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:42:36.3515751Z from pkg_resources import resource_filename
2025-08-14T21:42:37.0695454Z
2025-08-14T21:42:39.2400839Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:42:39.2401183Z loading model: 0it [00:02, ?it/s]
2025-08-14T21:42:39.2414595Z cpu eval CamemBert
2025-08-14T21:42:40.1247463Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:42:40.6064092Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:42:41.0574735Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:42:55.5760344Z cudagraph partition due to non gpu ops.
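The "WARNING:common:Trying to call the empty_gpu_cache for device: cpu" lines above repeat because the benchmark harness tries to release accelerator memory between measurements and only knows how to do that for CUDA and XPU; on a CPU run there is nothing to clear, so it just warns. A minimal sketch of that kind of device-guarded cache clear, assuming current PyTorch APIs; the helper name is mine, not the harness's:

import warnings
import torch

def empty_device_cache(device: str) -> None:
    # Hypothetical helper (not the benchmark's own code): only CUDA and
    # XPU expose an allocator cache to release, so any other device
    # falls through to a warning like the one in the log.
    if device == "cuda" and torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif device == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()
    else:
        warnings.warn(f"Trying to call the empty_gpu_cache for device: {device}, "
                      "which is not in list [cuda, xpu]")

empty_device_cache("cpu")  # emits a warning, as in the log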
Found from : 2025-08-14T21:42:55.5761336Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:55.5761782Z return mod(**inputs) 2025-08-14T21:42:55.5762320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:42:55.5762901Z outputs = self.roberta( 2025-08-14T21:42:55.5765528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 886, in forward 2025-08-14T21:42:55.5766060Z embedding_output = self.embeddings( 2025-08-14T21:42:55.5766577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 90, in forward 2025-08-14T21:42:55.5767314Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-08-14T21:42:55.5768103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1590, in create_position_ids_from_input_ids 2025-08-14T21:42:55.5768723Z mask = input_ids.ne(padding_idx).int() 2025-08-14T21:42:55.5768900Z 2025-08-14T21:42:55.5769004Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5769265Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5769526Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5769802Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5770054Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5770303Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5770550Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5770789Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5771034Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5771281Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5771645Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5771902Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5772196Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:42:55.5772666Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:55.5773072Z return mod(**inputs) 2025-08-14T21:42:55.5773567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:42:55.5774089Z outputs = self.roberta( 2025-08-14T21:42:55.5774571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 886, in forward 2025-08-14T21:42:55.5775089Z embedding_output = self.embeddings( 2025-08-14T21:42:55.5775599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 90, in forward 2025-08-14T21:42:55.5776274Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-08-14T21:42:55.5777028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1591, in create_position_ids_from_input_ids 2025-08-14T21:42:55.5777907Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask 2025-08-14T21:42:55.5778223Z 2025-08-14T21:42:55.5778358Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:55.5778804Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:55.5779200Z return mod(**inputs) 2025-08-14T21:42:55.5779686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:42:55.5780195Z outputs = self.roberta( 2025-08-14T21:42:55.5780678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 886, in forward 2025-08-14T21:42:55.5781200Z embedding_output = self.embeddings( 2025-08-14T21:42:55.5781778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 90, in forward 2025-08-14T21:42:55.5782454Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-08-14T21:42:55.5783200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1591, in create_position_ids_from_input_ids 2025-08-14T21:42:55.5783936Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask 2025-08-14T21:42:55.5784248Z 2025-08-14T21:42:55.5784349Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5784614Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5784862Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5785128Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5785377Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5785689Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5785947Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5786234Z cudagraph partition due to non gpu ops. 
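Both CamemBert traces above bottom out in create_position_ids_from_input_ids (modeling_camembert.py lines 1590-1591), which derives position ids from the padding mask with a cumulative sum. For context, a paraphrase of that pattern; it is recalled from the upstream RoBERTa/CamemBERT implementation and may differ by transformers version, and the padding id in the example is illustrative:

import torch

def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    # Padding positions keep padding_idx; real tokens get increasing ids
    # starting just above it (pattern paraphrased from the cited frames).
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
    return incremental_indices.long() + padding_idx

input_ids = torch.tensor([[5, 17, 23, 1, 1]])  # 1 used as the padding id here
print(create_position_ids_from_input_ids(input_ids, padding_idx=1))
# tensor([[2, 3, 4, 1, 1]])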
Found from : 2025-08-14T21:42:55.5786691Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:55.5787092Z return mod(**inputs) 2025-08-14T21:42:55.5787577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:42:55.5788091Z outputs = self.roberta( 2025-08-14T21:42:55.5788568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:42:55.5789085Z encoder_outputs = self.encoder( 2025-08-14T21:42:55.5789601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:42:55.5790164Z layer_outputs = layer_module( 2025-08-14T21:42:55.5790603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:55.5791072Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:55.5791589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-08-14T21:42:55.5796397Z self_attention_outputs = self.attention( 2025-08-14T21:42:55.5796879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:42:55.5797360Z return func(*args, **kwargs) 2025-08-14T21:42:55.5797867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-08-14T21:42:55.5798371Z self_outputs = self.self( 2025-08-14T21:42:55.5798837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:42:55.5799313Z return func(*args, **kwargs) 2025-08-14T21:42:55.5799810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-08-14T21:42:55.5800390Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:55.5800643Z 2025-08-14T21:42:55.5800744Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5801010Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5801374Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:55.5801839Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:55.5802245Z return mod(**inputs) 2025-08-14T21:42:55.5802731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:42:55.5803238Z outputs = self.roberta( 2025-08-14T21:42:55.5803814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:42:55.5804327Z encoder_outputs = self.encoder( 2025-08-14T21:42:55.5804833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:42:55.5805335Z layer_outputs = layer_module( 2025-08-14T21:42:55.5805769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:55.5806273Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:55.5806852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 570, in forward 2025-08-14T21:42:55.5807376Z layer_output = apply_chunking_to_forward( 2025-08-14T21:42:55.5807886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:42:55.5808382Z return forward_fn(*input_tensors) 2025-08-14T21:42:55.5808917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 578, in feed_forward_chunk 2025-08-14T21:42:55.5809525Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:42:55.5810091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 494, in forward 2025-08-14T21:42:55.5810643Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:42:55.5811117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:42:55.5811545Z return self.act(input) 2025-08-14T21:42:55.5811685Z 2025-08-14T21:42:55.5811881Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5812130Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5812384Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5812632Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5812873Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5813121Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5813366Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5813620Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5813892Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:55.5814345Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:55.5814756Z return mod(**inputs) 2025-08-14T21:42:55.5815228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:42:55.5815740Z outputs = self.roberta( 2025-08-14T21:42:55.5816230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:42:55.5816737Z encoder_outputs = self.encoder( 2025-08-14T21:42:55.5817238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:42:55.5817744Z layer_outputs = layer_module( 2025-08-14T21:42:55.5818180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:55.5818626Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:55.5819142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-08-14T21:42:55.5819661Z self_attention_outputs = self.attention( 2025-08-14T21:42:55.5820142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:42:55.5820609Z return func(*args, **kwargs) 2025-08-14T21:42:55.5825409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-08-14T21:42:55.5825983Z self_outputs = self.self( 2025-08-14T21:42:55.5826435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:42:55.5826913Z return func(*args, **kwargs) 2025-08-14T21:42:55.5827414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-08-14T21:42:55.5827997Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:55.5828231Z 2025-08-14T21:42:55.5828329Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5828582Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5828863Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:55.5829321Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:55.5829717Z return mod(**inputs) 2025-08-14T21:42:55.5830204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:42:55.5830720Z outputs = self.roberta( 2025-08-14T21:42:55.5831194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:42:55.5831701Z encoder_outputs = self.encoder( 2025-08-14T21:42:55.5832197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:42:55.5832702Z layer_outputs = layer_module( 2025-08-14T21:42:55.5833123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:55.5833577Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:55.5834139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 570, in forward 2025-08-14T21:42:55.5834656Z layer_output = apply_chunking_to_forward( 2025-08-14T21:42:55.5835177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:42:55.5835773Z return forward_fn(*input_tensors) 2025-08-14T21:42:55.5836314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 578, in feed_forward_chunk 2025-08-14T21:42:55.5836916Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:42:55.5837485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 494, in forward 2025-08-14T21:42:55.5838041Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:42:55.5838524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:42:55.5838954Z return self.act(input) 2025-08-14T21:42:55.5839101Z 2025-08-14T21:42:55.5839204Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5839463Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5839711Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5839962Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5840208Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5840458Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5840705Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5840950Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5841312Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:55.5841779Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:55.5842187Z return mod(**inputs) 2025-08-14T21:42:55.5842672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:42:55.5843169Z outputs = self.roberta( 2025-08-14T21:42:55.5843705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:42:55.5844220Z encoder_outputs = self.encoder( 2025-08-14T21:42:55.5844722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:42:55.5845223Z layer_outputs = layer_module( 2025-08-14T21:42:55.5845659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:55.5846111Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:55.5846622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-08-14T21:42:55.5847145Z self_attention_outputs = self.attention( 2025-08-14T21:42:55.5847627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:42:55.5848114Z return func(*args, **kwargs) 2025-08-14T21:42:55.5848604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-08-14T21:42:55.5849498Z self_outputs = self.self( 2025-08-14T21:42:55.5850001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:42:55.5854631Z return func(*args, **kwargs) 2025-08-14T21:42:55.5855110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-08-14T21:42:55.5855687Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:55.5856025Z 2025-08-14T21:42:55.5856129Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5856374Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5856661Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:55.5857112Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:55.5857519Z return mod(**inputs) 2025-08-14T21:42:55.5858046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:42:55.5858554Z outputs = self.roberta( 2025-08-14T21:42:55.5859036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:42:55.5859534Z encoder_outputs = self.encoder( 2025-08-14T21:42:55.5860038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:42:55.5860554Z layer_outputs = layer_module( 2025-08-14T21:42:55.5860982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:55.5861431Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:55.5861941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 570, in forward 2025-08-14T21:42:55.5862463Z layer_output = apply_chunking_to_forward( 2025-08-14T21:42:55.5862967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:42:55.5863454Z return forward_fn(*input_tensors) 2025-08-14T21:42:55.5863994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 578, in feed_forward_chunk 2025-08-14T21:42:55.5864724Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:42:55.5865285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 494, in forward 2025-08-14T21:42:55.5865841Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:42:55.5866383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:42:55.5866813Z return self.act(input) 2025-08-14T21:42:55.5866952Z 2025-08-14T21:42:55.5867051Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5867309Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5867554Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5867794Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5868040Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5868295Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5868538Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5868789Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5869072Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:55.5869518Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:55.5869919Z return mod(**inputs) 2025-08-14T21:42:55.5870403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:42:55.5870910Z outputs = self.roberta( 2025-08-14T21:42:55.5871383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:42:55.5871893Z encoder_outputs = self.encoder( 2025-08-14T21:42:55.5872392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:42:55.5872896Z layer_outputs = layer_module( 2025-08-14T21:42:55.5873320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:55.5873820Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:55.5874335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-08-14T21:42:55.5874855Z self_attention_outputs = self.attention( 2025-08-14T21:42:55.5875329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:42:55.5875791Z return func(*args, **kwargs) 2025-08-14T21:42:55.5876281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-08-14T21:42:55.5876769Z self_outputs = self.self( 2025-08-14T21:42:55.5877215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:42:55.5877675Z return func(*args, **kwargs) 2025-08-14T21:42:55.5878165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-08-14T21:42:55.5878793Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:55.5887427Z 2025-08-14T21:42:55.5887539Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5887833Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5888160Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:55.5888743Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:55.5889264Z return mod(**inputs) 2025-08-14T21:42:55.5889903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:42:55.5890451Z outputs = self.roberta( 2025-08-14T21:42:55.5890941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:42:55.5891455Z encoder_outputs = self.encoder( 2025-08-14T21:42:55.5891999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:42:55.5892504Z layer_outputs = layer_module( 2025-08-14T21:42:55.5892933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:55.5893437Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:55.5894015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 570, in forward 2025-08-14T21:42:55.5894541Z layer_output = apply_chunking_to_forward( 2025-08-14T21:42:55.5895043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:42:55.5895533Z return forward_fn(*input_tensors) 2025-08-14T21:42:55.5896077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 578, in feed_forward_chunk 2025-08-14T21:42:55.5896682Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:42:55.5897255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 494, in forward 2025-08-14T21:42:55.5897812Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:42:55.5898282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:42:55.5898707Z return self.act(input) 2025-08-14T21:42:55.5898846Z 2025-08-14T21:42:55.5898949Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5899195Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5899458Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5899741Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5899978Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5900295Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5900537Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5900782Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5901059Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:55.5901522Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:55.5901937Z return mod(**inputs) 2025-08-14T21:42:55.5902417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:42:55.5902930Z outputs = self.roberta( 2025-08-14T21:42:55.5903421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:42:55.5903929Z encoder_outputs = self.encoder( 2025-08-14T21:42:55.5904420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:42:55.5904927Z layer_outputs = layer_module( 2025-08-14T21:42:55.5905362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:55.5905800Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:55.5906310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-08-14T21:42:55.5906825Z self_attention_outputs = self.attention( 2025-08-14T21:42:55.5907301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:42:55.5907806Z return func(*args, **kwargs) 2025-08-14T21:42:55.5908368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-08-14T21:42:55.5908869Z self_outputs = self.self( 2025-08-14T21:42:55.5909310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:42:55.5909770Z return func(*args, **kwargs) 2025-08-14T21:42:55.5910308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-08-14T21:42:55.5910889Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:55.5911123Z 2025-08-14T21:42:55.5911220Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5911472Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5911760Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:55.5912198Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:55.5912600Z return mod(**inputs) 2025-08-14T21:42:55.5913080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:42:55.5913595Z outputs = self.roberta( 2025-08-14T21:42:55.5914072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:42:55.5914580Z encoder_outputs = self.encoder( 2025-08-14T21:42:55.5915079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:42:55.5915584Z layer_outputs = layer_module( 2025-08-14T21:42:55.5916005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:55.5916457Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:55.5916970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 570, in forward 2025-08-14T21:42:55.5917485Z layer_output = apply_chunking_to_forward( 2025-08-14T21:42:55.5918033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:42:55.5918535Z return forward_fn(*input_tensors) 2025-08-14T21:42:55.5919086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 578, in feed_forward_chunk 2025-08-14T21:42:55.5919685Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:42:55.5920254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 494, in forward 2025-08-14T21:42:55.5920803Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:42:55.5921373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:42:55.5921797Z return self.act(input) 2025-08-14T21:42:55.5921945Z 2025-08-14T21:42:55.5922043Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5922358Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5928895Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5929145Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5929401Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5929643Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5929896Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5930148Z cudagraph partition due to non gpu ops 2025-08-14T21:42:55.5930438Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:42:55.6136357Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:55.6136776Z return mod(**inputs)
2025-08-14T21:42:55.6137297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1059, in forward
2025-08-14T21:42:55.6137956Z masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:42:55.6138260Z
2025-08-14T21:43:02.2838274Z Compilation time (from dynamo_timed): 19.38785215
2025-08-14T21:43:02.2935763Z pass
2025-08-14T21:43:02.2936506Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:43:02.2937501Z TIMING: _recursive_pre_grad_passes:0.04955 _recursive_joint_graph_passes:0.53989 _recursive_post_grad_passes:0.10937 async_compile.wait:0.93741 code_gen:5.90688 inductor_compile:9.61626 backend_compile:15.70919 gc:0.00071 entire_frame_compile:19.38785 total_wall_time:19.38785
2025-08-14T21:43:02.2938711Z STATS: call_* op count: 297 | FakeTensorMode.__torch_dispatch__:24279 | FakeTensor.__torch_dispatch__:3917 | ProxyTorchDispatchMode.__torch_dispatch__:5350
2025-08-14T21:43:02.2939363Z Dynamo produced 1 graphs covering 297 ops with 0 graph breaks (0 unique)
2025-08-14T21:43:08.5781387Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:43:08.5782464Z from pkg_resources import resource_filename
2025-08-14T21:43:09.3971120Z
2025-08-14T21:43:23.5520112Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:43:23.5520461Z loading model: 0it [00:14, ?it/s]
2025-08-14T21:43:23.5561453Z cpu eval DebertaV2ForMaskedLM
2025-08-14T21:43:23.7367729Z Compilation time (from dynamo_timed): 0
2025-08-14T21:43:23.7368076Z pass_due_to_skip
2025-08-14T21:43:23.7377181Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:43:23.7377650Z TIMING: total_wall_time:0
2025-08-14T21:43:23.7377869Z STATS: call_* op count: 0
2025-08-14T21:43:23.7378203Z Dynamo produced 0 graphs covering 0 ops with 0 graph breaks (0 unique)
2025-08-14T21:43:29.3883717Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:43:29.3884813Z from pkg_resources import resource_filename
2025-08-14T21:43:30.1187854Z
2025-08-14T21:43:41.6697771Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:43:41.6698280Z loading model: 0it [00:11, ?it/s]
2025-08-14T21:43:41.6737485Z cpu eval DebertaV2ForQuestionAnswering
2025-08-14T21:43:46.5518171Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:43:48.9788845Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:43:51.1760050Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:44:19.7441364Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.7441983Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.7442539Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:44:19.7443405Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.7448478Z return mod(**inputs)
2025-08-14T21:44:19.7449670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.7450655Z outputs = self.deberta(
2025-08-14T21:44:19.7451542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.7452558Z encoder_outputs = self.encoder(
2025-08-14T21:44:19.7453933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.7454931Z output_states, attn_weights = layer_module(
2025-08-14T21:44:19.7455739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.7456588Z return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.7457511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:19.7458511Z attention_output, att_matrix = self.attention(
2025-08-14T21:44:19.7459671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:19.7460673Z self_output, att_matrix = self.self(
2025-08-14T21:44:19.7461637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward
2025-08-14T21:44:19.7462899Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads)
2025-08-14T21:44:19.7464293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores
2025-08-14T21:44:19.7465516Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1))
2025-08-14T21:44:19.7465964Z
2025-08-14T21:44:19.7466159Z cudagraph partition due to non gpu ops.
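For context, the "Compilation time (from dynamo_timed)" and TIMING lines above are torch.compile's own instrumentation of the compile pass. A minimal sketch of compiling and timing a comparable masked-LM model on CPU, assuming the public camembert-base checkpoint and the default inductor backend (illustrative only, not the benchmark harness itself):

    import time
    import torch
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    # Assumed checkpoint; this shard exercises CamembertForMaskedLM and DebertaV2 variants.
    tok = AutoTokenizer.from_pretrained("camembert-base")
    model = AutoModelForMaskedLM.from_pretrained("camembert-base").eval()
    inputs = tok("Le camembert est <mask> !", return_tensors="pt")

    compiled = torch.compile(model)  # default backend is inductor

    with torch.no_grad():
        t0 = time.perf_counter()
        compiled(**inputs)   # first call pays the dynamo + inductor compile cost
        t1 = time.perf_counter()
        compiled(**inputs)   # later calls reuse the compiled graph
        t2 = time.perf_counter()

    print(f"first call (incl. compile): {t1 - t0:.2f}s, steady state: {t2 - t1:.4f}s")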
Found from :
2025-08-14T21:44:19.7466620Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.7467033Z return mod(**inputs)
2025-08-14T21:44:19.7467781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.7468974Z outputs = self.deberta(
2025-08-14T21:44:19.7469868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.7470746Z encoder_outputs = self.encoder(
2025-08-14T21:44:19.7471680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.7472672Z output_states, attn_weights = layer_module(
2025-08-14T21:44:19.7477574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.7478087Z return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.7478595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:19.7479138Z attention_output, att_matrix = self.attention(
2025-08-14T21:44:19.7479666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:19.7480178Z self_output, att_matrix = self.self(
2025-08-14T21:44:19.7480683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward
2025-08-14T21:44:19.7481494Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))
2025-08-14T21:44:19.7481870Z
2025-08-14T21:44:19.7482095Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:19.7499319Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.7500066Z return mod(**inputs)
2025-08-14T21:44:19.7500970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.7502031Z outputs = self.deberta(
2025-08-14T21:44:19.7506935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.7507457Z encoder_outputs = self.encoder(
2025-08-14T21:44:19.7507962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.7508490Z output_states, attn_weights = layer_module(
2025-08-14T21:44:19.7509179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.7509973Z return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.7510926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:19.7511913Z attention_output, att_matrix = self.attention(
2025-08-14T21:44:19.7512937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:19.7513894Z self_output, att_matrix = self.self(
2025-08-14T21:44:19.7514792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward
2025-08-14T21:44:19.7516018Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads)
2025-08-14T21:44:19.7517306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores
2025-08-14T21:44:19.7517947Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1))
2025-08-14T21:44:19.7518189Z
2025-08-14T21:44:19.7518334Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:19.7518789Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.7519210Z return mod(**inputs)
2025-08-14T21:44:19.7519796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.7520308Z outputs = self.deberta(
2025-08-14T21:44:19.7520787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.7521357Z encoder_outputs = self.encoder(
2025-08-14T21:44:19.7521859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.7522380Z output_states, attn_weights = layer_module(
2025-08-14T21:44:19.7522843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.7523303Z return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.7524183Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:19.7525161Z attention_output, att_matrix = self.attention(
2025-08-14T21:44:19.7526193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:19.7527154Z self_output, att_matrix = self.self(
2025-08-14T21:44:19.7528078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward
2025-08-14T21:44:19.7529011Z context_layer = torch.bmm(
2025-08-14T21:44:19.7529275Z
2025-08-14T21:44:19.7529509Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:19.7530307Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.7531070Z return mod(**inputs)
2025-08-14T21:44:19.7540481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.7541561Z outputs = self.deberta(
2025-08-14T21:44:19.7542473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.7543402Z encoder_outputs = self.encoder(
2025-08-14T21:44:19.7544357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.7545333Z output_states, attn_weights = layer_module(
2025-08-14T21:44:19.7548116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.7548566Z return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.7549353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:19.7549893Z attention_output, att_matrix = self.attention(
2025-08-14T21:44:19.7550610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:19.7551541Z self_output, att_matrix = self.self(
2025-08-14T21:44:19.7552495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward
2025-08-14T21:44:19.7553715Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1))
2025-08-14T21:44:19.7554285Z
2025-08-14T21:44:19.7554468Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.7554954Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.7555950Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:19.7556802Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.7557532Z return mod(**inputs)
2025-08-14T21:44:19.7558392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.7559540Z outputs = self.deberta(
2025-08-14T21:44:19.7560564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.7561645Z encoder_outputs = self.encoder(
2025-08-14T21:44:19.7562616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.7563639Z output_states, attn_weights = layer_module(
2025-08-14T21:44:19.7564506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.7565331Z return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.7566255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward
2025-08-14T21:44:19.7567366Z intermediate_output = self.intermediate(attention_output)
2025-08-14T21:44:19.7568430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward
2025-08-14T21:44:19.7569453Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:44:19.7570330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:44:19.7571122Z return self.act(input)
2025-08-14T21:44:19.7571383Z
2025-08-14T21:44:19.7571559Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.7572021Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.7572468Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.7572984Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:44:19.7651217Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7652000Z return mod(**inputs) 2025-08-14T21:44:19.7652897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7653875Z outputs = self.deberta( 2025-08-14T21:44:19.7654803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7655790Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7656764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7657785Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7658663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7659527Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7660515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.7661549Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.7666887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.7667846Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.7668773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:19.7669719Z context_layer = torch.bmm( 2025-08-14T21:44:19.7669967Z 2025-08-14T21:44:19.7670189Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7671155Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7671874Z return mod(**inputs) 2025-08-14T21:44:19.7672741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7673662Z outputs = self.deberta( 2025-08-14T21:44:19.7674566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7675635Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7676742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7677290Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7677757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7678214Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7678734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.7679260Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.7679788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.7680296Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.7680928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:19.7682250Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:19.7682996Z 2025-08-14T21:44:19.7683170Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7683633Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7684156Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7684979Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7685747Z return mod(**inputs) 2025-08-14T21:44:19.7686683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7687714Z outputs = self.deberta( 2025-08-14T21:44:19.7688683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7689713Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7690602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7700413Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7701297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7702140Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7703090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:19.7704193Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:19.7707391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:19.7707976Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:19.7708455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:19.7708893Z return self.act(input) 2025-08-14T21:44:19.7709060Z 2025-08-14T21:44:19.7709171Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7709437Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7709679Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7710300Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7711120Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7711867Z return mod(**inputs) 2025-08-14T21:44:19.7712760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7713705Z outputs = self.deberta( 2025-08-14T21:44:19.7714643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7715626Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7716581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7717597Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7718422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7719255Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7720406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.7721508Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.7722514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.7723505Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.7724491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:19.7725898Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:19.7727246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.7728442Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.7728881Z 2025-08-14T21:44:19.7729129Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7729982Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7730767Z return mod(**inputs) 2025-08-14T21:44:19.7731671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7732652Z outputs = self.deberta( 2025-08-14T21:44:19.7733548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7738696Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7739662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7740660Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7741526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7742395Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7743372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.7744383Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.7745385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.7746339Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.7747333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.7749275Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.7749987Z 2025-08-14T21:44:19.7750233Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7751049Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7751795Z return mod(**inputs) 2025-08-14T21:44:19.7752701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7753649Z outputs = self.deberta( 2025-08-14T21:44:19.7754557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7755546Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7756513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7757530Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7758387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7759234Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7760206Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.7761195Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.7762308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.7767467Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.7768667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.7769927Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.7770519Z 2025-08-14T21:44:19.7770707Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7771210Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7772054Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7772781Z return mod(**inputs) 2025-08-14T21:44:19.7773672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7774624Z outputs = self.deberta( 2025-08-14T21:44:19.7775566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7776580Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7777524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7778717Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7779582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7780439Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7781412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.7782416Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.7783453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.7784408Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.7785407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-08-14T21:44:19.7786885Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-08-14T21:44:19.7788240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.7789407Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.7789861Z 2025-08-14T21:44:19.7790095Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7790961Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7791710Z return mod(**inputs) 2025-08-14T21:44:19.7796759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7797308Z outputs = self.deberta( 2025-08-14T21:44:19.7797801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7798315Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7798818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7799347Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7799809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7800261Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7800773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.7801805Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.7802877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.7803833Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.7804818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:19.7805788Z context_layer = torch.bmm( 2025-08-14T21:44:19.7806063Z 2025-08-14T21:44:19.7806301Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7807253Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7807664Z return mod(**inputs) 2025-08-14T21:44:19.7808150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7808650Z outputs = self.deberta( 2025-08-14T21:44:19.7809135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7810075Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7811015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7812003Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7812847Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7813689Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7814668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.7815649Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.7816642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.7817607Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.7818579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:19.7819942Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:19.7820535Z 2025-08-14T21:44:19.7820720Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7825575Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7826189Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7827041Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7827830Z return mod(**inputs) 2025-08-14T21:44:19.7828717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7829669Z outputs = self.deberta( 2025-08-14T21:44:19.7830607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7831563Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7832505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7833535Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7834415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7835271Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7836299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:19.7836882Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:19.7837450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:19.7838128Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:19.7838613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:19.7839047Z return self.act(input) 2025-08-14T21:44:19.7839188Z 2025-08-14T21:44:19.7839288Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7839548Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7839798Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7840087Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7840768Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7841592Z return mod(**inputs) 2025-08-14T21:44:19.7842476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7843460Z outputs = self.deberta( 2025-08-14T21:44:19.7844372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7845353Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7846312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7847315Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7848186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7849460Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7858679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.7859758Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.7860769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.7861778Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.7862903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:19.7864167Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:19.7867508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.7868165Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.7868409Z 2025-08-14T21:44:19.7868546Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7869003Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7869684Z return mod(**inputs) 2025-08-14T21:44:19.7870579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7871518Z outputs = self.deberta( 2025-08-14T21:44:19.7872436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7873419Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7874374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7875385Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7876236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7877093Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7878048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.7879391Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.7880429Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.7881524Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.7882474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.7883799Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.7884447Z 2025-08-14T21:44:19.7884697Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7885523Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7886273Z return mod(**inputs) 2025-08-14T21:44:19.7887224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7888211Z outputs = self.deberta( 2025-08-14T21:44:19.7889125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7890094Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7891035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7892069Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7892923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7898040Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7898898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.7899928Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.7901091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.7902092Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.7903093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.7904372Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.7905030Z 2025-08-14T21:44:19.7905217Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7905768Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7906635Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7907388Z return mod(**inputs) 2025-08-14T21:44:19.7908504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7909443Z outputs = self.deberta( 2025-08-14T21:44:19.7910353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7911347Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7912307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7913324Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7914179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7915032Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7915994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.7917156Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.7918170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.7919158Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.7920096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-08-14T21:44:19.7921438Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-08-14T21:44:19.7927013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.7928042Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.7928485Z 2025-08-14T21:44:19.7928724Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7929563Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7930349Z return mod(**inputs) 2025-08-14T21:44:19.7931263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7932254Z outputs = self.deberta( 2025-08-14T21:44:19.7933159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7934138Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7935062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7936052Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7936917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7937934Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7939048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.7940045Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.7941075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.7942042Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.7942998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:19.7943972Z context_layer = torch.bmm( 2025-08-14T21:44:19.7944249Z 2025-08-14T21:44:19.7944488Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7945345Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7946104Z return mod(**inputs) 2025-08-14T21:44:19.7947039Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7947978Z outputs = self.deberta( 2025-08-14T21:44:19.7949342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7950321Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7951280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7956203Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7956681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7957133Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7957833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.7958366Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.7958899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.7959420Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.7959925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:19.7960890Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:19.7961539Z 2025-08-14T21:44:19.7961714Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7962212Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7962737Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.7986717Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.7987519Z return mod(**inputs) 2025-08-14T21:44:19.7988446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.7989425Z outputs = self.deberta( 2025-08-14T21:44:19.7990363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.7991351Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.7992314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.7993320Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.7994200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.7995038Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.7995969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:19.7996727Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:19.7997311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:19.7997866Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:19.7998354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:19.7998791Z return self.act(input) 2025-08-14T21:44:19.7998940Z 2025-08-14T21:44:19.7999055Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7999316Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7999571Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.7999968Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8000779Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8001597Z return mod(**inputs) 2025-08-14T21:44:19.8002536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8003494Z outputs = self.deberta( 2025-08-14T21:44:19.8004398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8005376Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8006330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8007311Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8008163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8009177Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8018672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8019787Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8020799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8021809Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8022752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:19.8024012Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:19.8027010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8027669Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8027919Z 2025-08-14T21:44:19.8028068Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8028522Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8029145Z return mod(**inputs) 2025-08-14T21:44:19.8030055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8031027Z outputs = self.deberta( 2025-08-14T21:44:19.8031931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8032905Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8033859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8034854Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8035849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8036726Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8037670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8038859Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8039866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8040859Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8041906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8043227Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8043893Z 2025-08-14T21:44:19.8044138Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8045010Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8045770Z return mod(**inputs) 2025-08-14T21:44:19.8046679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8047660Z outputs = self.deberta( 2025-08-14T21:44:19.8048572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8050081Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8050652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8051205Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8051839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8052310Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8052841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8053469Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8054061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8054578Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8055097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8055779Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8056141Z 2025-08-14T21:44:19.8056246Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8056546Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8057014Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8057417Z return mod(**inputs) 2025-08-14T21:44:19.8057907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8058421Z outputs = self.deberta( 2025-08-14T21:44:19.8058911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8059410Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8059913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8060442Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8060897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8061416Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8061930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8062465Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8062995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8063511Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8064023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-08-14T21:44:19.8064696Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-08-14T21:44:19.8065399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8066051Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8066294Z 2025-08-14T21:44:19.8066450Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8066897Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8067302Z return mod(**inputs) 2025-08-14T21:44:19.8072005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8072513Z outputs = self.deberta( 2025-08-14T21:44:19.8073008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8073519Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8074071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8074650Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8075111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8075565Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8076070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8076601Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8077133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8077642Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8078138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:19.8078642Z context_layer = torch.bmm( 2025-08-14T21:44:19.8078790Z 2025-08-14T21:44:19.8078929Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8079381Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8079781Z return mod(**inputs) 2025-08-14T21:44:19.8080259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8080762Z outputs = self.deberta( 2025-08-14T21:44:19.8081297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8081822Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8082397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8082981Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8084652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8085114Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8085627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8086160Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8086678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8087188Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8087690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:19.8088341Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:19.8088660Z 2025-08-14T21:44:19.8088762Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8089030Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8089324Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8089763Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8090171Z return mod(**inputs) 2025-08-14T21:44:19.8090654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8091148Z outputs = self.deberta( 2025-08-14T21:44:19.8095750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8096261Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8101149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8101792Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8102265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8102717Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8103221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:19.8103779Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:19.8104352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:19.8104923Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:19.8105401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:19.8105823Z return self.act(input) 2025-08-14T21:44:19.8105963Z 2025-08-14T21:44:19.8106066Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8106318Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8106567Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8106849Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8107292Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8107688Z return mod(**inputs) 2025-08-14T21:44:19.8108168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8108669Z outputs = self.deberta( 2025-08-14T21:44:19.8109139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8109639Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8110136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8110724Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8111258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8111762Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8112268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8112797Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8113321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8113824Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8114325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:19.8114981Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:19.8115682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8116305Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8116549Z 2025-08-14T21:44:19.8116682Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8117123Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8117585Z return mod(**inputs) 2025-08-14T21:44:19.8118054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8118554Z outputs = self.deberta( 2025-08-14T21:44:19.8119055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8119556Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8120051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8120571Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8121026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8121555Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8122063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8122606Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8123138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8123640Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8124149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8124830Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8125168Z 2025-08-14T21:44:19.8125307Z cudagraph partition due to non gpu ops. 
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
    outputs = self.deberta(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
    output_states, attn_weights = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
    attention_output, att_matrix = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
    self_output, att_matrix = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward
    attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
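The same boundaries are reported once per compiled layer, so the raw log repeats these stacks many times. A rough post-processing sketch that groups the "cudagraph partition due to non gpu ops" messages by the innermost frame preceding them (deberta_test.log is a hypothetical path to a locally saved copy of this job log):

# Count "cudagraph partition due to non gpu ops" messages per innermost model frame.
import re
from collections import Counter

frame_re = re.compile(r'File "([^"]+)", line (\d+), in (\w+)')
partition_re = re.compile(r"cudagraph partition due to non gpu ops")

counts = Counter()
last_frame = None
with open("deberta_test.log") as f:  # hypothetical path to the saved raw log
    for line in f:
        m = frame_re.search(line)
        if m:
            # Remember the most recent (innermost) frame seen before the message.
            last_frame = f"{m.group(1).rsplit('/', 1)[-1]}:{m.group(2)} ({m.group(3)})"
        elif partition_re.search(line) and last_frame:
            counts[last_frame] += 1

for frame, n in counts.most_common():
    print(f"{n:4d}  {frame}")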
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
    outputs = self.deberta(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
    output_states, attn_weights = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
    attention_output, att_matrix = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
    self_output, att_matrix = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward
    value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores
    return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1))

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
    outputs = self.deberta(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
    output_states, attn_weights = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
    attention_output, att_matrix = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
    self_output, att_matrix = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward
    context_layer = torch.bmm(

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
    outputs = self.deberta(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
    output_states, attn_weights = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
    attention_output, att_matrix = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
    self_output, att_matrix = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward
    context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1))

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
    outputs = self.deberta(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
    output_states, attn_weights = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward
    intermediate_output = self.intermediate(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
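The log does not name the exact non-GPU op behind each message, but the frame at modeling_deberta_v2.py line 248 shows one tensor that is typically created on the host in that code path, the 0-dim `scale`. As a hedged sketch of the general workaround idea only, not a change the benchmark or transformers actually makes, keeping the scale as a Python float keeps the division on the GPU:

# Sketch only: compute the attention-score scale as a Python float instead of a
# CPU tensor, so no host tensor enters the compiled region.
import math
import torch

def scaled_scores(query_layer, key_layer):
    scale = math.sqrt(query_layer.size(-1))  # plain Python float
    return torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale)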
Found from : 2025-08-14T21:44:19.8183803Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8184249Z return mod(**inputs) 2025-08-14T21:44:19.8184735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8185276Z outputs = self.deberta( 2025-08-14T21:44:19.8185753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8186249Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8186739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8187259Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8187706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8188154Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8188699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8189227Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8189742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8190244Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8190748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:19.8191393Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:19.8192079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8192753Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8192988Z 2025-08-14T21:44:19.8193127Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8193577Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8193979Z return mod(**inputs) 2025-08-14T21:44:19.8194511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8195023Z outputs = self.deberta( 2025-08-14T21:44:19.8195496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8196003Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8196502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8197025Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8197473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8197921Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8204833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8205364Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8205897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8206409Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8206915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8207586Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8207964Z 2025-08-14T21:44:19.8208094Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8208540Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8208964Z return mod(**inputs) 2025-08-14T21:44:19.8209442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8209943Z outputs = self.deberta( 2025-08-14T21:44:19.8210414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8210916Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8211403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8211921Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8212376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8212892Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8213458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8213993Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8214515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8215021Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8215526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8216201Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8216542Z 2025-08-14T21:44:19.8216648Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8216933Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8217386Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8217800Z return mod(**inputs) 2025-08-14T21:44:19.8218318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8218822Z outputs = self.deberta( 2025-08-14T21:44:19.8219303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8219806Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8220297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8220816Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8221276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8221721Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8222230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8222772Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8223295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8223797Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8224300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-08-14T21:44:19.8224952Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-08-14T21:44:19.8225683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8226296Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8226561Z 2025-08-14T21:44:19.8226693Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8227140Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8231807Z return mod(**inputs) 2025-08-14T21:44:19.8232296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8232804Z outputs = self.deberta( 2025-08-14T21:44:19.8233283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8233784Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8234290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8234816Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8235276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8235719Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8236229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8236762Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8237286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8237787Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8238288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:19.8238789Z context_layer = torch.bmm( 2025-08-14T21:44:19.8238935Z 2025-08-14T21:44:19.8239064Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8239514Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8239916Z return mod(**inputs) 2025-08-14T21:44:19.8240443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8240940Z outputs = self.deberta( 2025-08-14T21:44:19.8241493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8242085Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8242619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8243146Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8243600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8244053Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8244562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8245092Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8245619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8246126Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8246622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:19.8247274Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:19.8247612Z 2025-08-14T21:44:19.8247721Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8247970Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8248302Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8249073Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8249488Z return mod(**inputs) 2025-08-14T21:44:19.8249966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8250482Z outputs = self.deberta( 2025-08-14T21:44:19.8250974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8251486Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8251974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8252511Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8252980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8253429Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8253946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:19.8254511Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:19.8255075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:19.8255621Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:19.8256095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:19.8260703Z return self.act(input) 2025-08-14T21:44:19.8260877Z 2025-08-14T21:44:19.8260983Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8261232Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8261489Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8261770Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8262294Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8262707Z return mod(**inputs) 2025-08-14T21:44:19.8263193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8263691Z outputs = self.deberta( 2025-08-14T21:44:19.8264168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8264668Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8265167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8265684Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8266147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8266603Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8267117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8267640Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8268170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8268676Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8269174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:19.8269872Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:19.8270599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8271351Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8271586Z 2025-08-14T21:44:19.8271716Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8272165Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8272571Z return mod(**inputs) 2025-08-14T21:44:19.8273048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8273547Z outputs = self.deberta( 2025-08-14T21:44:19.8274026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8274526Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8275023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8275548Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8276006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8276458Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8276953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8277485Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8278012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8278522Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8279024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8279703Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8280041Z 2025-08-14T21:44:19.8280231Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8280677Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8281074Z return mod(**inputs) 2025-08-14T21:44:19.8281634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8282134Z outputs = self.deberta( 2025-08-14T21:44:19.8282605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8283111Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8283603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8284124Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8284573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8285026Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8289811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8290356Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8290877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8291419Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8291917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8292612Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8292952Z 2025-08-14T21:44:19.8293054Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8293347Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8293791Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8294186Z return mod(**inputs) 2025-08-14T21:44:19.8294664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8295161Z outputs = self.deberta( 2025-08-14T21:44:19.8295627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8296132Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8296620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8297144Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8297596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8298044Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8298550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8299078Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8299594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8300220Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8300728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-08-14T21:44:19.8301377Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-08-14T21:44:19.8302119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8302742Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8302975Z 2025-08-14T21:44:19.8303112Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8303546Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8303947Z return mod(**inputs) 2025-08-14T21:44:19.8304430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8304938Z outputs = self.deberta( 2025-08-14T21:44:19.8305410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8305911Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8306407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8306933Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8307388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8307846Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8308358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8308910Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8309432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8309956Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8310457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:19.8310957Z context_layer = torch.bmm( 2025-08-14T21:44:19.8311109Z 2025-08-14T21:44:19.8311238Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8311683Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8312076Z return mod(**inputs) 2025-08-14T21:44:19.8312547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8313050Z outputs = self.deberta( 2025-08-14T21:44:19.8313526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8314017Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8323075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8323799Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8324283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8324725Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8325236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8325760Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8326277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8326797Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8327300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:19.8327951Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:19.8328313Z 2025-08-14T21:44:19.8328415Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8328739Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8329025Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8329529Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8329925Z return mod(**inputs) 2025-08-14T21:44:19.8330407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8330917Z outputs = self.deberta( 2025-08-14T21:44:19.8331389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8331899Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8332398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8332921Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8333372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8333818Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8334160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:19.8334317Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:19.8334690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:19.8334835Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:19.8335130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:19.8335221Z return self.act(input) 2025-08-14T21:44:19.8335237Z 2025-08-14T21:44:19.8335344Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8335437Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8335534Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8335679Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8335932Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8336021Z return mod(**inputs) 2025-08-14T21:44:19.8336375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8336466Z outputs = self.deberta( 2025-08-14T21:44:19.8336818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8336910Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8337255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8337369Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8337650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8337752Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8338094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8338208Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8338549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8338648Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8338993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:19.8339286Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:19.8339679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8339851Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8339864Z 2025-08-14T21:44:19.8339993Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8340247Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8340327Z return mod(**inputs) 2025-08-14T21:44:19.8340676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8340775Z outputs = self.deberta( 2025-08-14T21:44:19.8341120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8341206Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8341551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8341656Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8341940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8342038Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8342407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8342528Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8342892Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8342997Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8343420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8343740Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8343754Z 2025-08-14T21:44:19.8343889Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8344140Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8344228Z return mod(**inputs) 2025-08-14T21:44:19.8344586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8344674Z outputs = self.deberta( 2025-08-14T21:44:19.8345025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8345120Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8345458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8345578Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8345857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8345961Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8346302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8346421Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8346763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8346857Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8347241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8347512Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8347524Z 2025-08-14T21:44:19.8347622Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8347751Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8348004Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8348090Z return mod(**inputs) 2025-08-14T21:44:19.8348443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8348530Z outputs = self.deberta( 2025-08-14T21:44:19.8349229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8349327Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8349671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8349791Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8350068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8350168Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8350564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8350679Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8351064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8351165Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8351507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-08-14T21:44:19.8351753Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-08-14T21:44:19.8352145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8352311Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8352326Z 2025-08-14T21:44:19.8352451Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8352701Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8352797Z return mod(**inputs) 2025-08-14T21:44:19.8353151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8353245Z outputs = self.deberta( 2025-08-14T21:44:19.8353588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8353678Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8354021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8354127Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8354411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8354513Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8354853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8354975Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8355364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8355459Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8355804Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:19.8355889Z context_layer = torch.bmm( 2025-08-14T21:44:19.8355902Z 2025-08-14T21:44:19.8356034Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8356283Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8356360Z return mod(**inputs) 2025-08-14T21:44:19.8356713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8356800Z outputs = self.deberta( 2025-08-14T21:44:19.8357143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8357243Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8357585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8363931Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8364216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8364395Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8364754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8364892Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8365237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8365338Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8365682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:19.8365930Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:19.8365943Z 2025-08-14T21:44:19.8366042Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8366137Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8366272Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8366518Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8366605Z return mod(**inputs) 2025-08-14T21:44:19.8366952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8367034Z outputs = self.deberta( 2025-08-14T21:44:19.8367386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8367473Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8367809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8367921Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8368194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8368298Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8368643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:19.8368790Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:19.8369175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:19.8369317Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:19.8369595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:19.8369683Z return self.act(input) 2025-08-14T21:44:19.8369696Z 2025-08-14T21:44:19.8369790Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8369891Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8369987Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8370113Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8370374Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8370459Z return mod(**inputs) 2025-08-14T21:44:19.8370814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8370902Z outputs = self.deberta( 2025-08-14T21:44:19.8371243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8371337Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8371677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8371783Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8372092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8372259Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8372638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8372803Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8373149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8373252Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8373591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:19.8373833Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:19.8374226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8374396Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8374411Z 2025-08-14T21:44:19.8374547Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8374797Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8374889Z return mod(**inputs) 2025-08-14T21:44:19.8375237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8375323Z outputs = self.deberta( 2025-08-14T21:44:19.8375671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8375766Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8376106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8376227Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8376504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8376612Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8377006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8377122Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8377465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8377563Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8377911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8378184Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8378197Z 2025-08-14T21:44:19.8378327Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:44:19.8378581Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.8378661Z     return mod(**inputs)
2025-08-14T21:44:19.8379012Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.8379103Z     outputs = self.deberta(
2025-08-14T21:44:19.8379447Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.8379545Z     encoder_outputs = self.encoder(
2025-08-14T21:44:19.8379884Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.8380013Z     output_states, attn_weights = layer_module(
2025-08-14T21:44:19.8380293Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.8380414Z     return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.8380765Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:19.8380879Z     attention_output, att_matrix = self.attention(
2025-08-14T21:44:19.8381215Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:19.8381311Z     self_output, att_matrix = self.self(
2025-08-14T21:44:19.8381647Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward
2025-08-14T21:44:19.8381907Z     attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))
2025-08-14T21:44:19.8381924Z 
2025-08-14T21:44:19.8382020Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.8382146Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:19.8382401Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.8382486Z     return mod(**inputs)
2025-08-14T21:44:19.8382828Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.8382917Z     outputs = self.deberta(
2025-08-14T21:44:19.8383262Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.8383362Z     encoder_outputs = self.encoder(
2025-08-14T21:44:19.8383700Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.8383809Z     output_states, attn_weights = layer_module(
2025-08-14T21:44:19.8384088Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.8384188Z     return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.8384571Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:19.8384691Z     attention_output, att_matrix = self.attention(
2025-08-14T21:44:19.8385025Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:19.8385125Z     self_output, att_matrix = self.self(
2025-08-14T21:44:19.8385463Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward
2025-08-14T21:44:19.8385707Z     value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads)
2025-08-14T21:44:19.8386104Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores
2025-08-14T21:44:19.8386268Z     return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1))
2025-08-14T21:44:19.8386281Z 
2025-08-14T21:44:19.8386418Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:19.8390906Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.8390997Z     return mod(**inputs)
2025-08-14T21:44:19.8391404Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.8391497Z     outputs = self.deberta(
2025-08-14T21:44:19.8391847Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.8391981Z     encoder_outputs = self.encoder(
2025-08-14T21:44:19.8392324Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.8392461Z     output_states, attn_weights = layer_module(
2025-08-14T21:44:19.8392744Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.8392841Z     return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.8393192Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:19.8393305Z     attention_output, att_matrix = self.attention(
2025-08-14T21:44:19.8393651Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:19.8393749Z     self_output, att_matrix = self.self(
2025-08-14T21:44:19.8394090Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward
2025-08-14T21:44:19.8394188Z     context_layer = torch.bmm(
2025-08-14T21:44:19.8394201Z 
2025-08-14T21:44:19.8394327Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:19.8394589Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.8394675Z     return mod(**inputs)
2025-08-14T21:44:19.8395023Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.8395110Z     outputs = self.deberta(
2025-08-14T21:44:19.8395447Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.8395536Z     encoder_outputs = self.encoder(
2025-08-14T21:44:19.8395887Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.8395994Z     output_states, attn_weights = layer_module(
2025-08-14T21:44:19.8396283Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.8396379Z     return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.8396755Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:19.8396881Z     attention_output, att_matrix = self.attention(
2025-08-14T21:44:19.8397218Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:19.8397314Z     self_output, att_matrix = self.self(
2025-08-14T21:44:19.8397656Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward
2025-08-14T21:44:19.8397895Z     context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1))
2025-08-14T21:44:19.8397908Z 
2025-08-14T21:44:19.8398014Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.8398110Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.8398235Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:19.8398490Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.8398571Z     return mod(**inputs)
2025-08-14T21:44:19.8398926Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.8399013Z     outputs = self.deberta(
2025-08-14T21:44:19.8399355Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.8399479Z     encoder_outputs = self.encoder(
2025-08-14T21:44:19.8399821Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.8399948Z     output_states, attn_weights = layer_module(
2025-08-14T21:44:19.8400234Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.8400334Z     return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.8400677Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward
2025-08-14T21:44:19.8400826Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:44:19.8401296Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward
2025-08-14T21:44:19.8401475Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:44:19.8401803Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:44:19.8401894Z     return self.act(input)
2025-08-14T21:44:19.8401907Z 
2025-08-14T21:44:19.8402004Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.8402096Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.8402199Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.8402329Z cudagraph partition due to non gpu ops.
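Every entry in this run of log lines is the same diagnostic: Inductor's CUDA-graph handling reports that the traced ops are not GPU ops, so they cannot be captured in a CUDA graph and the compiled graph gets partitioned around them. The snippet below is a hypothetical, minimal sketch of how such a mix can arise under torch.compile(mode="reduce-overhead"), the mode that requests CUDA graphs; the module, shapes, and the deliberate .cpu() round-trip are invented for illustration and are not this benchmark's actual code or invocation.

import torch

class MixedDeviceBlock(torch.nn.Module):
    # Hypothetical module: most of the forward runs on the GPU, but one step is
    # forced onto the CPU, which is the kind of "non gpu op" that keeps a region
    # out of CUDA graph capture.
    def __init__(self) -> None:
        super().__init__()
        self.proj = torch.nn.Linear(64, 64)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.proj(x)                  # runs on the GPU when x is a CUDA tensor
        y = y.cpu().relu().to(x.device)   # CPU detour: not capturable in a CUDA graph
        return torch.bmm(y, y.transpose(-1, -2))

if torch.cuda.is_available():
    mod = MixedDeviceBlock().cuda()
    compiled = torch.compile(mod, mode="reduce-overhead")
    out = compiled(torch.randn(8, 16, 64, device="cuda"))
    print(out.shape)  # torch.Size([8, 16, 16])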
Found from : 2025-08-14T21:44:19.8535836Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8535918Z return mod(**inputs) 2025-08-14T21:44:19.8536278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8536362Z outputs = self.deberta( 2025-08-14T21:44:19.8536707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8536800Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8537142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8537248Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8537533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8537634Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8537976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8538088Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8538424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8538568Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8538905Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:19.8538997Z context_layer = torch.bmm( 2025-08-14T21:44:19.8539009Z 2025-08-14T21:44:19.8539134Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8539380Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8539464Z return mod(**inputs) 2025-08-14T21:44:19.8539807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8539890Z outputs = self.deberta( 2025-08-14T21:44:19.8540233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8540325Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8540669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8540774Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8541051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8541150Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8541489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8541630Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8541969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8542086Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8542434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:19.8542673Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:19.8542685Z 2025-08-14T21:44:19.8542782Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8542882Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8543006Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8543256Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8543337Z return mod(**inputs) 2025-08-14T21:44:19.8543683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8543775Z outputs = self.deberta( 2025-08-14T21:44:19.8544115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8544203Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8544549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8544651Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8544930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8545027Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8545369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:19.8545523Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:19.8545861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:19.8546051Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:19.8550926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:19.8551071Z return self.act(input) 2025-08-14T21:44:19.8551105Z 2025-08-14T21:44:19.8551209Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8551303Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8551394Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8551529Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8551783Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8551868Z return mod(**inputs) 2025-08-14T21:44:19.8552217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8552304Z outputs = self.deberta( 2025-08-14T21:44:19.8552654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8552742Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8553080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8553189Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8553464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8553635Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8553976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8554115Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8554457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8554558Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8554909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:19.8555145Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:19.8555537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8555708Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8555721Z 2025-08-14T21:44:19.8555849Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8556106Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8556188Z return mod(**inputs) 2025-08-14T21:44:19.8556541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8556630Z outputs = self.deberta( 2025-08-14T21:44:19.8556970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8557057Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8557401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8557509Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8557792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8557890Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8558228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8558401Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8558742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8558840Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8559180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8559448Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8559463Z 2025-08-14T21:44:19.8559592Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8559840Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8559922Z return mod(**inputs) 2025-08-14T21:44:19.8560278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8560359Z outputs = self.deberta( 2025-08-14T21:44:19.8560777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8560866Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8561345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8561457Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8561756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8561853Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8562222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8562333Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8562674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8562765Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8563101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8563372Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8563386Z 2025-08-14T21:44:19.8563480Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8563610Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8563863Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8563948Z return mod(**inputs) 2025-08-14T21:44:19.8564304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8564386Z outputs = self.deberta( 2025-08-14T21:44:19.8564733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8564826Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8565168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8565282Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8565556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8565653Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8565996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8566111Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8566499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8566593Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8566930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-08-14T21:44:19.8567176Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-08-14T21:44:19.8567566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8567732Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8567747Z 2025-08-14T21:44:19.8567873Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8568128Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8568214Z return mod(**inputs) 2025-08-14T21:44:19.8568564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8568653Z outputs = self.deberta( 2025-08-14T21:44:19.8568999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8569085Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8569455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8569557Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8569854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8569961Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8570304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8570425Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8570760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8570853Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8571200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:19.8571294Z context_layer = torch.bmm( 2025-08-14T21:44:19.8571306Z 2025-08-14T21:44:19.8571436Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8571684Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8571762Z return mod(**inputs) 2025-08-14T21:44:19.8572113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8572198Z outputs = self.deberta( 2025-08-14T21:44:19.8572535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8572627Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8572963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8573076Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8573349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8573448Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8573788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8573941Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8574281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8574379Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8574714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:19.8574956Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:19.8574970Z 2025-08-14T21:44:19.8575069Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8579359Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8579495Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8579792Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8579883Z return mod(**inputs) 2025-08-14T21:44:19.8580234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8580319Z outputs = self.deberta( 2025-08-14T21:44:19.8580665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8580753Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8581090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8581229Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8581506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8581630Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8581972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:19.8582120Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:19.8582462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:19.8582599Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:19.8582874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:19.8582963Z return self.act(input) 2025-08-14T21:44:19.8582975Z 2025-08-14T21:44:19.8583069Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8583167Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8583263Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8583391Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8583644Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8583724Z return mod(**inputs) 2025-08-14T21:44:19.8584078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8584160Z outputs = self.deberta( 2025-08-14T21:44:19.8584499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8584593Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8584934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8585038Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8585321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8585416Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8586352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8586468Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8586810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8586910Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8587250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:19.8587505Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:19.8587902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8588068Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8588084Z 2025-08-14T21:44:19.8588223Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8588472Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8588561Z return mod(**inputs) 2025-08-14T21:44:19.8588912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8589002Z outputs = self.deberta( 2025-08-14T21:44:19.8589379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8589472Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8589897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8590027Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8590365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8590472Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8590812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8590926Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8591272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8591365Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8591706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8591978Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8591991Z 2025-08-14T21:44:19.8592122Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8592378Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8592456Z return mod(**inputs) 2025-08-14T21:44:19.8592807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8592894Z outputs = self.deberta( 2025-08-14T21:44:19.8593234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8593329Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8593667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8593776Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8594104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8594202Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8594545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8594655Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8594994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8595092Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8595431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8595693Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8595710Z 2025-08-14T21:44:19.8595804Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8595931Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8596182Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8596262Z return mod(**inputs) 2025-08-14T21:44:19.8596609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8596697Z outputs = self.deberta( 2025-08-14T21:44:19.8597037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8597160Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8597500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8597626Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8597909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8598002Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8598343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8598458Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8598792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8598888Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8599226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-08-14T21:44:19.8599464Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-08-14T21:44:19.8599863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8600024Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8600036Z 2025-08-14T21:44:19.8600168Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8600421Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8600501Z return mod(**inputs) 2025-08-14T21:44:19.8600859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8600945Z outputs = self.deberta( 2025-08-14T21:44:19.8601372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8601478Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8601885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8602004Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8602283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8602383Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8602734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8602849Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8603195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8603291Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8603631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:19.8603724Z context_layer = torch.bmm( 2025-08-14T21:44:19.8603737Z 2025-08-14T21:44:19.8603868Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8604121Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8608439Z return mod(**inputs) 2025-08-14T21:44:19.8608840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8608933Z outputs = self.deberta( 2025-08-14T21:44:19.8609300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8609392Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8609765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8609874Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8610166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8610268Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8610609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8610732Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8611071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8611168Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8611518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:19.8611761Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:19.8611775Z 2025-08-14T21:44:19.8611885Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8611982Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8612312Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8612566Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8612645Z return mod(**inputs) 2025-08-14T21:44:19.8612997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8613083Z outputs = self.deberta( 2025-08-14T21:44:19.8613424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8613519Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8613860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8614009Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8614296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8614392Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8614739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:19.8614888Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:19.8615226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:19.8615370Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:19.8615638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:19.8615726Z return self.act(input) 2025-08-14T21:44:19.8615738Z 2025-08-14T21:44:19.8615835Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8615929Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8616026Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8616153Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8616402Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8616485Z return mod(**inputs) 2025-08-14T21:44:19.8616833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8616946Z outputs = self.deberta( 2025-08-14T21:44:19.8617288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8617397Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8617745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8617851Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8618128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8618229Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8618567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8618766Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8619135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8619250Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8619593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:19.8619833Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:19.8620228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8620391Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8620404Z 2025-08-14T21:44:19.8620533Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8620785Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8620871Z return mod(**inputs) 2025-08-14T21:44:19.8621227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8621313Z outputs = self.deberta( 2025-08-14T21:44:19.8621700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8621797Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8622137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8622243Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8622527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8622623Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8622971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8623084Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8623423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8623527Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8623867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8624134Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8624147Z 2025-08-14T21:44:19.8624272Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8624523Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8624633Z return mod(**inputs) 2025-08-14T21:44:19.8624981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8625088Z outputs = self.deberta( 2025-08-14T21:44:19.8625432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8625523Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8625874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8625977Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8626254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8626357Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8626695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8626816Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8627157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8627253Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8627601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8627866Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8627880Z 2025-08-14T21:44:19.8627983Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8628106Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8628352Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8628438Z return mod(**inputs) 2025-08-14T21:44:19.8628785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8628870Z outputs = self.deberta( 2025-08-14T21:44:19.8629216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8629353Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8629697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8629802Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8630081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8630185Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8630527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8630640Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8630986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8631079Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8631428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-08-14T21:44:19.8631670Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-08-14T21:44:19.8632060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8632229Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8632263Z 2025-08-14T21:44:19.8632389Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8632642Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8632747Z return mod(**inputs) 2025-08-14T21:44:19.8633096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8641384Z outputs = self.deberta( 2025-08-14T21:44:19.8641855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8642003Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8642469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8642586Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8642964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8643072Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8643539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8643673Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8644147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8644257Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8644734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:19.8644826Z context_layer = torch.bmm( 2025-08-14T21:44:19.8644840Z 2025-08-14T21:44:19.8644991Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8645248Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8645333Z return mod(**inputs) 2025-08-14T21:44:19.8645680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8645766Z outputs = self.deberta( 2025-08-14T21:44:19.8646158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8646250Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8646593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8646701Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8646979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8647081Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8647421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8647535Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8650545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8650644Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8650989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:19.8651225Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:19.8651237Z 2025-08-14T21:44:19.8651336Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8651437Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8651595Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8651842Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8651927Z return mod(**inputs) 2025-08-14T21:44:19.8652299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8652387Z outputs = self.deberta( 2025-08-14T21:44:19.8652733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8652824Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8653175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8653283Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8653561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8653669Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8654009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:19.8654169Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:19.8654515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:19.8654653Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:19.8654927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:19.8655011Z return self.act(input) 2025-08-14T21:44:19.8655023Z 2025-08-14T21:44:19.8655122Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8655213Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8655304Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8655438Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8655687Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8655768Z return mod(**inputs) 2025-08-14T21:44:19.8656119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8656271Z outputs = self.deberta( 2025-08-14T21:44:19.8656619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8656711Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8657049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8657160Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8657440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8657542Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8657887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8658007Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8658356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8658451Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8658790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:19.8659030Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:19.8659422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:19.8660099Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:19.8660366Z 2025-08-14T21:44:19.8660505Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8660947Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8661351Z return mod(**inputs) 2025-08-14T21:44:19.8661832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8662427Z outputs = self.deberta( 2025-08-14T21:44:19.8662962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8663472Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8663971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8664488Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8664951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8665410Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8665935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8666468Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8666996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8667516Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8668019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:19.8668695Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:19.8669039Z 2025-08-14T21:44:19.8669169Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:44:19.8669618Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.8670027Z     return mod(**inputs)
2025-08-14T21:44:19.8670546Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.8671228Z     outputs = self.deberta(
2025-08-14T21:44:19.8671707Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.8672208Z     encoder_outputs = self.encoder(
2025-08-14T21:44:19.8672701Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.8673223Z     output_states, attn_weights = layer_module(
2025-08-14T21:44:19.8673678Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.8674137Z     return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.8674647Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:19.8675174Z     attention_output, att_matrix = self.attention(
2025-08-14T21:44:19.8675698Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:19.8676202Z     self_output, att_matrix = self.self(
2025-08-14T21:44:19.8680827Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward
2025-08-14T21:44:19.8681558Z     attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))
2025-08-14T21:44:19.8681921Z 
2025-08-14T21:44:19.8682023Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.8682301Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:19.8682770Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.8683165Z     return mod(**inputs)
2025-08-14T21:44:19.8683645Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.8684142Z     outputs = self.deberta(
2025-08-14T21:44:19.8684614Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.8685114Z     encoder_outputs = self.encoder(
2025-08-14T21:44:19.8685604Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.8686124Z     output_states, attn_weights = layer_module(
2025-08-14T21:44:19.8686577Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.8687023Z     return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.8687523Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:19.8688054Z     attention_output, att_matrix = self.attention(
2025-08-14T21:44:19.8688578Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:19.8689086Z     self_output, att_matrix = self.self(
2025-08-14T21:44:19.8689582Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward
2025-08-14T21:44:19.8690238Z     value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads)
2025-08-14T21:44:19.8690932Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores
2025-08-14T21:44:19.8691671Z     return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1))
2025-08-14T21:44:19.8691909Z 
2025-08-14T21:44:19.8692096Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:19.8692545Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.8692990Z     return mod(**inputs)
2025-08-14T21:44:19.8693463Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.8694007Z     outputs = self.deberta(
2025-08-14T21:44:19.8694486Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.8695038Z     encoder_outputs = self.encoder(
2025-08-14T21:44:19.8695523Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.8696040Z     output_states, attn_weights = layer_module(
2025-08-14T21:44:19.8696498Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.8696943Z     return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.8697446Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:19.8697973Z     attention_output, att_matrix = self.attention(
2025-08-14T21:44:19.8698495Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:19.8698999Z     self_output, att_matrix = self.self(
2025-08-14T21:44:19.8699525Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward
2025-08-14T21:44:19.8700017Z     context_layer = torch.bmm(
2025-08-14T21:44:19.8700185Z 
2025-08-14T21:44:19.8700320Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:19.8700763Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.8701160Z     return mod(**inputs)
2025-08-14T21:44:19.8701636Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.8702129Z     outputs = self.deberta(
2025-08-14T21:44:19.8702601Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.8703097Z     encoder_outputs = self.encoder(
2025-08-14T21:44:19.8703589Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.8704110Z     output_states, attn_weights = layer_module(
2025-08-14T21:44:19.8704562Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.8705003Z     return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.8705507Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:19.8710270Z     attention_output, att_matrix = self.attention(
2025-08-14T21:44:19.8710786Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:19.8711296Z     self_output, att_matrix = self.self(
2025-08-14T21:44:19.8711801Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward
2025-08-14T21:44:19.8712456Z     context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1))
2025-08-14T21:44:19.8712765Z 
2025-08-14T21:44:19.8712866Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.8713118Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.8713403Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:19.8713894Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:19.8714287Z     return mod(**inputs)
2025-08-14T21:44:19.8714763Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:19.8715259Z     outputs = self.deberta(
2025-08-14T21:44:19.8715726Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:19.8716229Z     encoder_outputs = self.encoder(
2025-08-14T21:44:19.8716725Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:19.8717242Z     output_states, attn_weights = layer_module(
2025-08-14T21:44:19.8717688Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:19.8718133Z     return super().__call__(*args, **kwargs)
2025-08-14T21:44:19.8718635Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward
2025-08-14T21:44:19.8719189Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:44:19.8719750Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward
2025-08-14T21:44:19.8720370Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:44:19.8720914Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:44:19.8721407Z     return self.act(input)
2025-08-14T21:44:19.8721550Z 
2025-08-14T21:44:19.8721672Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.8721925Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.8722173Z cudagraph partition due to non gpu ops
2025-08-14T21:44:19.8722451Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:44:19.8986972Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8987061Z return mod(**inputs) 2025-08-14T21:44:19.8987407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8987536Z outputs = self.deberta( 2025-08-14T21:44:19.8987883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8987973Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8988318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8988426Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8988704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8988810Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8989155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8989278Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8989615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8989712Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8990056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:19.8990146Z context_layer = torch.bmm( 2025-08-14T21:44:19.8990158Z 2025-08-14T21:44:19.8990296Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8990544Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8990630Z return mod(**inputs) 2025-08-14T21:44:19.8990986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8991069Z outputs = self.deberta( 2025-08-14T21:44:19.8991417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8991515Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.8991914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.8992024Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.8992306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.8992401Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.8992744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:19.8992857Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:19.8993202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:19.8993296Z self_output, att_matrix = self.self( 2025-08-14T21:44:19.8993636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:19.8993878Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:19.8993891Z 2025-08-14T21:44:19.8993986Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8994078Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.8994208Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:19.8994455Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:19.8994565Z return mod(**inputs) 2025-08-14T21:44:19.8994912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:19.8995023Z outputs = self.deberta( 2025-08-14T21:44:19.8995374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:19.8995463Z encoder_outputs = self.encoder( 2025-08-14T21:44:19.9000045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:19.9000169Z output_states, attn_weights = layer_module( 2025-08-14T21:44:19.9000449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:19.9000560Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:19.9000901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:19.9001055Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:19.9001476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:19.9001621Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:19.9001896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:19.9001980Z return self.act(input) 2025-08-14T21:44:19.9001993Z 2025-08-14T21:44:19.9002090Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.9002191Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.9002285Z cudagraph partition due to non gpu ops 2025-08-14T21:44:19.9002411Z cudagraph partition due to non gpu ops. 
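The "Found from" traces above all land on the same few lines of the DebertaV2 self-attention code: the transpose_for_scores reshape (line 194), the torch.bmm score and context products (lines 248 and 268), and the final view of the context layer (line 272). As a reading aid only, here is a minimal, self-contained Python sketch of those tensor operations with made-up shapes; it is not the benchmark harness and not the transformers implementation, and a plain softmax stands in for DeBERTa's disentangled-attention handling of the scores.

import torch

def fold_heads(x: torch.Tensor, num_heads: int) -> torch.Tensor:
    # Mirrors transpose_for_scores (modeling_deberta_v2.py line 194 in the traces):
    # (bs, seq, hidden) -> (bs * num_heads, seq, head_dim)
    bs, seq, hidden = x.shape
    x = x.view(bs, seq, num_heads, hidden // num_heads)
    return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1))

bs, seq, hidden, heads = 2, 16, 64, 4   # illustrative shapes only
query_states = torch.randn(bs, seq, hidden)
hidden_states = torch.randn(bs, seq, hidden)

query_layer = fold_heads(query_states, heads)    # line 236 in the traces
key_layer = fold_heads(hidden_states, heads)
value_layer = fold_heads(hidden_states, heads)   # line 238 in the traces

scale = torch.sqrt(torch.tensor(query_layer.size(-1), dtype=torch.float32))
# line 248 in the traces: batched matmul over the folded (bs * heads, ...) tensors
attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))

attention_probs = torch.softmax(attention_scores, dim=-1)   # stand-in for the real score handling
context_layer = torch.bmm(attention_probs, value_layer)     # line 268 in the traces
# line 272 in the traces: unfold the heads back out of the batch dimension
context_layer = context_layer.view(-1, heads, context_layer.size(-2), context_layer.size(-1))

On this CPU-only benchmark every one of these ops is a non-GPU op, which appears to be why Inductor's cudagraph partitioning keeps reporting these locations.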
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1262, in forward
    start_loss = loss_fct(start_logits, start_positions)

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1263, in forward
    end_loss = loss_fct(end_logits, end_positions)

2025-08-14T21:44:32.7965919Z Compilation time (from dynamo_timed): 38.247354016
2025-08-14T21:44:32.7966292Z pass
2025-08-14T21:44:32.7966783Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:44:32.7968279Z TIMING: _recursive_pre_grad_passes:0.12514 _recursive_joint_graph_passes:1.50613 _recursive_post_grad_passes:0.38785 async_compile.wait:0.63691 code_gen:10.48416 inductor_compile:16.90587 backend_compile:31.28828 gc:0.00061 entire_frame_compile:38.24735 total_wall_time:38.24735
2025-08-14T21:44:32.7970769Z STATS: call_* op count: 1087 | FakeTensorMode.__torch_dispatch__:57230 | FakeTensor.__torch_dispatch__:9191 | ProxyTorchDispatchMode.__torch_dispatch__:13100
2025-08-14T21:44:32.7971493Z Dynamo produced 1 graphs covering 1087 ops with 0 graph breaks (0 unique)
2025-08-14T21:44:39.6698661Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:44:39.6699771Z   from pkg_resources import resource_filename
2025-08-14T21:44:41.6681538Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:44:41.6681883Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:44:41.6694649Z cpu eval DistilBertForMaskedLM
2025-08-14T21:44:42.1045488Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:44:42.2466742Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:44:42.3943255Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:44:51.5584995Z cudagraph partition due to non gpu ops
2025-08-14T21:44:51.5585378Z cudagraph partition due to non gpu ops
2025-08-14T21:44:51.5585676Z cudagraph partition due to non gpu ops
2025-08-14T21:44:51.5585923Z cudagraph partition due to non gpu ops
2025-08-14T21:44:51.5586177Z cudagraph partition due to non gpu ops
2025-08-14T21:44:51.5586430Z cudagraph partition due to non gpu ops
2025-08-14T21:44:51.5586673Z cudagraph partition due to non gpu ops
2025-08-14T21:44:51.5586919Z cudagraph partition due to non gpu ops
2025-08-14T21:44:51.5587196Z cudagraph partition due to non gpu ops
2025-08-14T21:44:51.5587432Z cudagraph partition due to non gpu ops
2025-08-14T21:44:51.5587675Z cudagraph partition due to non gpu ops
2025-08-14T21:44:51.5587929Z cudagraph partition due to non gpu ops
2025-08-14T21:44:51.5588163Z cudagraph partition due to non gpu ops
2025-08-14T21:44:51.5588452Z cudagraph partition due to non gpu ops.
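The summary records above report roughly 38 s of end-to-end compile time for the DeBERTa-v2 run (one Dynamo graph, 1087 ops, no graph breaks), broken down by dynamo_timed into phases such as inductor_compile and code_gen. A rough, hypothetical sketch of how one could measure a comparable first-call number outside the harness, using only the public torch.compile API and a toy CPU model (all names and shapes below are illustrative, not part of this job):

import time

import torch
import torch.nn as nn

# Toy stand-in model; the real run compiled a model covering 1087 ops.
model = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64)).eval()
compiled = torch.compile(model)

x = torch.randn(8, 64)
start = time.perf_counter()
with torch.no_grad():
    compiled(x)   # first call triggers Dynamo tracing and Inductor code generation
print(f"first call (compile + run): {time.perf_counter() - start:.2f}s")

The first call pays the tracing and code-generation cost; later calls reuse the compiled artifact, which is why the harness reports compilation time separately from the benchmark measurements.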
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward
    dlbrt_output = self.distilbert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward
    return self.transformer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward
    sa_output = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward
    dlbrt_output = self.distilbert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward
    return self.transformer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 494, in forward
    ffn_output = self.ffn(sa_output)  # (bs, seq_length, dim)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 428, in forward
    return apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, input)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
    return forward_fn(*input_tensors)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 432, in ff_chunk
    x = self.activation(x)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
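The DistilBertForMaskedLM traces follow the same pattern as the DeBERTa ones: the reported partition points sit on ordinary attention and feed-forward ops, here the fused scaled_dot_product_attention call and the activation applied inside ff_chunk. A small illustrative sketch with assumed shapes (not the transformers code; GELU is assumed here as the activation behind self.act):

import torch
import torch.nn.functional as F

bs, heads, seq, head_dim = 2, 12, 32, 64   # illustrative shapes only

q = torch.randn(bs, heads, seq, head_dim)
k = torch.randn(bs, heads, seq, head_dim)
v = torch.randn(bs, heads, seq, head_dim)

# modeling_distilbert.py line 402 in the traces: fused attention kernel
attn_output = F.scaled_dot_product_attention(q, k, v)

# activations.py line 69 in the traces ("return self.act(input)");
# GELU is assumed as the activation used by self.act.
ffn_hidden = torch.randn(bs, seq, heads * head_dim)
ffn_hidden = F.gelu(ffn_hidden)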
Found from : 2025-08-14T21:44:51.5627084Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:51.5627477Z return mod(**inputs) 2025-08-14T21:44:51.5627960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward 2025-08-14T21:44:51.5628470Z dlbrt_output = self.distilbert( 2025-08-14T21:44:51.5628980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:44:51.5629485Z return self.transformer( 2025-08-14T21:44:51.5629978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:44:51.5630483Z layer_outputs = layer_module( 2025-08-14T21:44:51.5630907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:51.5631359Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:51.5631880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 494, in forward 2025-08-14T21:44:51.5632483Z ffn_output = self.ffn(sa_output) # (bs, seq_length, dim) 2025-08-14T21:44:51.5633113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 428, in forward 2025-08-14T21:44:51.5633790Z return apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, input) 2025-08-14T21:44:51.5634542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:44:51.5635045Z return forward_fn(*input_tensors) 2025-08-14T21:44:51.5635555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 432, in ff_chunk 2025-08-14T21:44:51.5636068Z x = self.activation(x) 2025-08-14T21:44:51.5636471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:51.5636893Z return self.act(input) 2025-08-14T21:44:51.5637030Z 2025-08-14T21:44:51.5637128Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5637391Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5637641Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5637884Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5638142Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5638393Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5638628Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5638880Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5639162Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:51.5639599Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:51.5640006Z return mod(**inputs) 2025-08-14T21:44:51.5640488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward 2025-08-14T21:44:51.5641030Z dlbrt_output = self.distilbert( 2025-08-14T21:44:51.5641617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:44:51.5642169Z return self.transformer( 2025-08-14T21:44:51.5642672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:44:51.5643188Z layer_outputs = layer_module( 2025-08-14T21:44:51.5643609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:51.5644059Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:51.5644575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-08-14T21:44:51.5645077Z sa_output = self.attention( 2025-08-14T21:44:51.5663560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-08-14T21:44:51.5664186Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:44:51.5664434Z 2025-08-14T21:44:51.5664550Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5664800Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5665111Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:44:51.5665656Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:51.5666073Z return mod(**inputs) 2025-08-14T21:44:51.5666567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward 2025-08-14T21:44:51.5667099Z dlbrt_output = self.distilbert( 2025-08-14T21:44:51.5667620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:44:51.5668139Z return self.transformer( 2025-08-14T21:44:51.5668640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:44:51.5669155Z layer_outputs = layer_module( 2025-08-14T21:44:51.5669596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:51.5670221Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:51.5670761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 494, in forward 2025-08-14T21:44:51.5671332Z ffn_output = self.ffn(sa_output) # (bs, seq_length, dim) 2025-08-14T21:44:51.5671893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 428, in forward 2025-08-14T21:44:51.5672576Z return apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, input) 2025-08-14T21:44:51.5673243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 
2025-08-14T21:44:51.5673752Z return forward_fn(*input_tensors) 2025-08-14T21:44:51.5674274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 432, in ff_chunk 2025-08-14T21:44:51.5674794Z x = self.activation(x) 2025-08-14T21:44:51.5675204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:51.5675634Z return self.act(input) 2025-08-14T21:44:51.5675789Z 2025-08-14T21:44:51.5675933Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5680399Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5680662Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5680954Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5681340Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5681587Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5681835Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5682093Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5682409Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:44:51.5682861Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:51.5683281Z return mod(**inputs) 2025-08-14T21:44:51.5683769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward 2025-08-14T21:44:51.5684282Z dlbrt_output = self.distilbert( 2025-08-14T21:44:51.5684797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:44:51.5685309Z return self.transformer( 2025-08-14T21:44:51.5685794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:44:51.5686320Z layer_outputs = layer_module( 2025-08-14T21:44:51.5686762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:51.5687225Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:51.5687743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-08-14T21:44:51.5688259Z sa_output = self.attention( 2025-08-14T21:44:51.5688758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-08-14T21:44:51.5689346Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:44:51.5689585Z 2025-08-14T21:44:51.5689686Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5689943Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5690248Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:51.5690811Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:51.5691223Z return mod(**inputs) 2025-08-14T21:44:51.5691709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward 2025-08-14T21:44:51.5692280Z dlbrt_output = self.distilbert( 2025-08-14T21:44:51.5692787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:44:51.5693299Z return self.transformer( 2025-08-14T21:44:51.5693795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:44:51.5694295Z layer_outputs = layer_module( 2025-08-14T21:44:51.5694785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:51.5695244Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:51.5695769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 494, in forward 2025-08-14T21:44:51.5696326Z ffn_output = self.ffn(sa_output) # (bs, seq_length, dim) 2025-08-14T21:44:51.5696896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 428, in forward 2025-08-14T21:44:51.5697582Z return apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, input) 2025-08-14T21:44:51.5698237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:44:51.5698732Z return forward_fn(*input_tensors) 2025-08-14T21:44:51.5699257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 432, in ff_chunk 2025-08-14T21:44:51.5699807Z x = self.activation(x) 2025-08-14T21:44:51.5700203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:51.5700649Z return self.act(input) 2025-08-14T21:44:51.5700792Z 2025-08-14T21:44:51.5700895Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5701157Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5701400Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5701645Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5701891Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5702131Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5702377Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5702621Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5702901Z cudagraph partition due to non gpu ops. 
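Several of the traces instead end in DistilBERT's feed-forward block, which routes through transformers' apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, input). A small sketch of how that helper behaves; the toy ffn module and the chunk size of 32 are assumptions for illustration, not the model's real configuration:

import torch
from torch import nn
from transformers.pytorch_utils import apply_chunking_to_forward

ffn = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64))

def ff_chunk(x):
    # Stand-in for DistilBERT's ff_chunk: lin1 -> activation -> lin2 (dropout omitted).
    return ffn(x)

hidden = torch.randn(2, 128, 64)  # (batch, seq_len, dim)

# chunk_size 0 means "no chunking"; a positive chunk size splits the input along the
# sequence dimension (dim 1 here) and concatenates the per-chunk outputs, trading
# peak memory for extra calls. Both paths produce the same result.
out_full    = apply_chunking_to_forward(ff_chunk, 0, 1, hidden)
out_chunked = apply_chunking_to_forward(ff_chunk, 32, 1, hidden)
print(torch.allclose(out_full, out_chunked))  # True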
Found from : 2025-08-14T21:44:51.5703348Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:51.5703751Z return mod(**inputs) 2025-08-14T21:44:51.5704234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward 2025-08-14T21:44:51.5704752Z dlbrt_output = self.distilbert( 2025-08-14T21:44:51.5709534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:44:51.5710057Z return self.transformer( 2025-08-14T21:44:51.5710547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:44:51.5711062Z layer_outputs = layer_module( 2025-08-14T21:44:51.5711500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:51.5711956Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:51.5712476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-08-14T21:44:51.5712995Z sa_output = self.attention( 2025-08-14T21:44:51.5713493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-08-14T21:44:51.5714144Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:44:51.5714386Z 2025-08-14T21:44:51.5714486Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5714745Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5715034Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:44:51.5715474Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:51.5715881Z return mod(**inputs) 2025-08-14T21:44:51.5716373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward 2025-08-14T21:44:51.5716904Z dlbrt_output = self.distilbert( 2025-08-14T21:44:51.5717405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:44:51.5717917Z return self.transformer( 2025-08-14T21:44:51.5718413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:44:51.5718929Z layer_outputs = layer_module( 2025-08-14T21:44:51.5719413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:51.5719942Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:51.5720464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 494, in forward 2025-08-14T21:44:51.5721016Z ffn_output = self.ffn(sa_output) # (bs, seq_length, dim) 2025-08-14T21:44:51.5721672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 428, in forward 2025-08-14T21:44:51.5722377Z return apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, input) 2025-08-14T21:44:51.5723037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 
2025-08-14T21:44:51.5723573Z return forward_fn(*input_tensors) 2025-08-14T21:44:51.5724093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 432, in ff_chunk 2025-08-14T21:44:51.5724610Z x = self.activation(x) 2025-08-14T21:44:51.5725007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:51.5725428Z return self.act(input) 2025-08-14T21:44:51.5725574Z 2025-08-14T21:44:51.5725670Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5725923Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5726162Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5726408Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5726656Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5726890Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5727131Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5727381Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5727655Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:44:51.5728101Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:51.5728500Z return mod(**inputs) 2025-08-14T21:44:51.5728979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward 2025-08-14T21:44:51.5729490Z dlbrt_output = self.distilbert( 2025-08-14T21:44:51.5730001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:44:51.5730514Z return self.transformer( 2025-08-14T21:44:51.5731004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:44:51.5731516Z layer_outputs = layer_module( 2025-08-14T21:44:51.5731997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:51.5732453Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:51.5732965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-08-14T21:44:51.5733474Z sa_output = self.attention( 2025-08-14T21:44:51.5734025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-08-14T21:44:51.5738848Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:44:51.5739083Z 2025-08-14T21:44:51.5739182Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5739436Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5739724Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:51.5740169Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:51.5740579Z return mod(**inputs) 2025-08-14T21:44:51.5741068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward 2025-08-14T21:44:51.5741586Z dlbrt_output = self.distilbert( 2025-08-14T21:44:51.5742091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:44:51.5742604Z return self.transformer( 2025-08-14T21:44:51.5743132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:44:51.5743637Z layer_outputs = layer_module( 2025-08-14T21:44:51.5744097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:51.5744547Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:51.5745078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 494, in forward 2025-08-14T21:44:51.5745628Z ffn_output = self.ffn(sa_output) # (bs, seq_length, dim) 2025-08-14T21:44:51.5746186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 428, in forward 2025-08-14T21:44:51.5746862Z return apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, input) 2025-08-14T21:44:51.5747517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:44:51.5748017Z return forward_fn(*input_tensors) 2025-08-14T21:44:51.5748584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 432, in ff_chunk 2025-08-14T21:44:51.5749480Z x = self.activation(x) 2025-08-14T21:44:51.5749885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:51.5750313Z return self.act(input) 2025-08-14T21:44:51.5750460Z 2025-08-14T21:44:51.5750558Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5750817Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5751059Z cudagraph partition due to non gpu ops 2025-08-14T21:44:51.5751345Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:44:51.5751792Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:51.5752194Z return mod(**inputs)
2025-08-14T21:44:51.5752717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 843, in forward
2025-08-14T21:44:51.5753392Z mlm_loss = self.mlm_loss_fct(prediction_logits.view(-1, prediction_logits.size(-1)), labels.view(-1))
2025-08-14T21:44:51.5753703Z
2025-08-14T21:44:56.4133922Z Compilation time (from dynamo_timed): 12.629028399
2025-08-14T21:44:56.4143101Z pass
2025-08-14T21:44:56.4143537Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:44:56.4144913Z TIMING: _recursive_pre_grad_passes:0.02706 _recursive_joint_graph_passes:0.36684 _recursive_post_grad_passes:0.06977 async_compile.wait:0.89436 code_gen:4.45352 inductor_compile:7.40312 backend_compile:10.7328 gc:0.00012 entire_frame_compile:12.62903 total_wall_time:12.62903
2025-08-14T21:44:56.4146357Z STATS: call_* op count: 153 | FakeTensorMode.__torch_dispatch__:12821 | FakeTensor.__torch_dispatch__:2081 | ProxyTorchDispatchMode.__torch_dispatch__:2801
2025-08-14T21:44:56.4146987Z Dynamo produced 1 graphs covering 153 ops with 0 graph breaks (0 unique)
2025-08-14T21:45:02.6480788Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:45:02.6481956Z from pkg_resources import resource_filename
2025-08-14T21:45:03.4367694Z
2025-08-14T21:45:04.3199802Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:45:04.3200138Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:45:04.3212074Z cpu eval DistilBertForQuestionAnswering
2025-08-14T21:45:04.7301359Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:04.8547395Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:04.9893324Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:14.1708913Z cudagraph partition due to non gpu ops
2025-08-14T21:45:14.1709296Z cudagraph partition due to non gpu ops
2025-08-14T21:45:14.1709569Z cudagraph partition due to non gpu ops
2025-08-14T21:45:14.1709851Z cudagraph partition due to non gpu ops
2025-08-14T21:45:14.1710123Z cudagraph partition due to non gpu ops
2025-08-14T21:45:14.1710368Z cudagraph partition due to non gpu ops
2025-08-14T21:45:14.1710614Z cudagraph partition due to non gpu ops
2025-08-14T21:45:14.1710863Z cudagraph partition due to non gpu ops
2025-08-14T21:45:14.1711101Z cudagraph partition due to non gpu ops
2025-08-14T21:45:14.1711345Z cudagraph partition due to non gpu ops
2025-08-14T21:45:14.1711602Z cudagraph partition due to non gpu ops
2025-08-14T21:45:14.1711852Z cudagraph partition due to non gpu ops
2025-08-14T21:45:14.1712089Z cudagraph partition due to non gpu ops
2025-08-14T21:45:14.1712388Z cudagraph partition due to non gpu ops.
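The first trace in the block above ends in the masked-LM loss at modeling_distilbert.py line 843, which flattens the prediction logits and labels before a token-level cross entropy. A sketch of that pattern with assumed sizes (batch=2, seq_len=8, vocab=30522) rather than the benchmark's real inputs:

import torch
from torch import nn

prediction_logits = torch.randn(2, 8, 30522)
labels = torch.randint(0, 30522, (2, 8))
labels[:, ::2] = -100  # unmasked positions are ignored; -100 is CrossEntropyLoss's default ignore_index

# Same flattening as in the trace: (batch*seq_len, vocab) scored against (batch*seq_len,).
mlm_loss_fct = nn.CrossEntropyLoss()
mlm_loss = mlm_loss_fct(prediction_logits.view(-1, prediction_logits.size(-1)), labels.view(-1))
print(mlm_loss.item())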
Found from : 2025-08-14T21:45:14.1712870Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:14.1713298Z return mod(**inputs) 2025-08-14T21:45:14.1713829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward 2025-08-14T21:45:14.1714375Z distilbert_output = self.distilbert( 2025-08-14T21:45:14.1714914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:45:14.1715445Z return self.transformer( 2025-08-14T21:45:14.1715949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:45:14.1722839Z layer_outputs = layer_module( 2025-08-14T21:45:14.1723323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:14.1723779Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:14.1724338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-08-14T21:45:14.1725136Z sa_output = self.attention( 2025-08-14T21:45:14.1725645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-08-14T21:45:14.1726236Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:45:14.1726476Z 2025-08-14T21:45:14.1726580Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1726838Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1727128Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:45:14.1727582Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:14.1727992Z return mod(**inputs) 2025-08-14T21:45:14.1728493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward 2025-08-14T21:45:14.1729034Z distilbert_output = self.distilbert( 2025-08-14T21:45:14.1729556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:45:14.1730073Z return self.transformer( 2025-08-14T21:45:14.1730571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:45:14.1731167Z layer_outputs = layer_module( 2025-08-14T21:45:14.1731638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:14.1732098Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:14.1732673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 494, in forward 2025-08-14T21:45:14.1733229Z ffn_output = self.ffn(sa_output) # (bs, seq_length, dim) 2025-08-14T21:45:14.1733838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 428, in forward 2025-08-14T21:45:14.1734529Z return apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, input) 2025-08-14T21:45:14.1735189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 
2025-08-14T21:45:14.1735684Z return forward_fn(*input_tensors) 2025-08-14T21:45:14.1736217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 432, in ff_chunk 2025-08-14T21:45:14.1736733Z x = self.activation(x) 2025-08-14T21:45:14.1737142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:45:14.1737560Z return self.act(input) 2025-08-14T21:45:14.1737716Z 2025-08-14T21:45:14.1737818Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1738084Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1738330Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1738578Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1738828Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1739064Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1739306Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1739548Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1739823Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:45:14.1740270Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:14.1740673Z return mod(**inputs) 2025-08-14T21:45:14.1741159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward 2025-08-14T21:45:14.1741676Z distilbert_output = self.distilbert( 2025-08-14T21:45:14.1742198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:45:14.1742708Z return self.transformer( 2025-08-14T21:45:14.1743258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:45:14.1743764Z layer_outputs = layer_module( 2025-08-14T21:45:14.1744201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:14.1744650Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:14.1745168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-08-14T21:45:14.1750392Z sa_output = self.attention( 2025-08-14T21:45:14.1750970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-08-14T21:45:14.1751569Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:45:14.1751806Z 2025-08-14T21:45:14.1751909Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1752163Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1752447Z cudagraph partition due to non gpu ops. 
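The repeated WARNING:common:Trying to call the empty_gpu_cache for device: cpu lines in this output come from the benchmark harness asking to flush an accelerator allocator cache on a cpu-only run. A hedged sketch of a device-guarded variant; empty_accelerator_cache is a hypothetical helper written for illustration, not the harness's empty_gpu_cache:

import torch

def empty_accelerator_cache(device: str) -> None:
    # Hypothetical helper: only touch allocator caches on devices that actually have one,
    # so a cpu run is a silent no-op instead of a warning.
    if device == "cuda" and torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif device == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()

empty_accelerator_cache("cpu")  # nothing to release on cpu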
Found from : 2025-08-14T21:45:14.1752887Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:14.1753295Z return mod(**inputs) 2025-08-14T21:45:14.1753783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward 2025-08-14T21:45:14.1754314Z distilbert_output = self.distilbert( 2025-08-14T21:45:14.1754893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:45:14.1755405Z return self.transformer( 2025-08-14T21:45:14.1755941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:45:14.1756453Z layer_outputs = layer_module( 2025-08-14T21:45:14.1756885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:14.1757341Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:14.1757862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 494, in forward 2025-08-14T21:45:14.1758422Z ffn_output = self.ffn(sa_output) # (bs, seq_length, dim) 2025-08-14T21:45:14.1758984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 428, in forward 2025-08-14T21:45:14.1759667Z return apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, input) 2025-08-14T21:45:14.1760470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:45:14.1760970Z return forward_fn(*input_tensors) 2025-08-14T21:45:14.1761577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 432, in ff_chunk 2025-08-14T21:45:14.1762100Z x = self.activation(x) 2025-08-14T21:45:14.1762502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:45:14.1762917Z return self.act(input) 2025-08-14T21:45:14.1763068Z 2025-08-14T21:45:14.1763170Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1763435Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1763684Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1763940Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1764184Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1764430Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1764677Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1764920Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1765274Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:14.1765722Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:14.1766126Z return mod(**inputs) 2025-08-14T21:45:14.1766611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward 2025-08-14T21:45:14.1767129Z distilbert_output = self.distilbert( 2025-08-14T21:45:14.1767647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:45:14.1768165Z return self.transformer( 2025-08-14T21:45:14.1768664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:45:14.1769170Z layer_outputs = layer_module( 2025-08-14T21:45:14.1769604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:14.1770052Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:14.1770562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-08-14T21:45:14.1771072Z sa_output = self.attention( 2025-08-14T21:45:14.1771567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-08-14T21:45:14.1772149Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:45:14.1772442Z 2025-08-14T21:45:14.1772540Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1772800Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1773084Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:45:14.1773553Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:14.1773958Z return mod(**inputs) 2025-08-14T21:45:14.1778657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward 2025-08-14T21:45:14.1779243Z distilbert_output = self.distilbert( 2025-08-14T21:45:14.1779763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:45:14.1780276Z return self.transformer( 2025-08-14T21:45:14.1780774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:45:14.1781291Z layer_outputs = layer_module( 2025-08-14T21:45:14.1781720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:14.1782177Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:14.1782699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 494, in forward 2025-08-14T21:45:14.1783257Z ffn_output = self.ffn(sa_output) # (bs, seq_length, dim) 2025-08-14T21:45:14.1783816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 428, in forward 2025-08-14T21:45:14.1784494Z return apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, input) 2025-08-14T21:45:14.1785146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 
2025-08-14T21:45:14.1785639Z return forward_fn(*input_tensors) 2025-08-14T21:45:14.1786153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 432, in ff_chunk 2025-08-14T21:45:14.1786670Z x = self.activation(x) 2025-08-14T21:45:14.1787084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:45:14.1787552Z return self.act(input) 2025-08-14T21:45:14.1787699Z 2025-08-14T21:45:14.1787796Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1788054Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1788299Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1788544Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1788859Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1789165Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1789455Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1789697Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1790000Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:45:14.1790446Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:14.1790852Z return mod(**inputs) 2025-08-14T21:45:14.1791348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward 2025-08-14T21:45:14.1791876Z distilbert_output = self.distilbert( 2025-08-14T21:45:14.1792385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:45:14.1792896Z return self.transformer( 2025-08-14T21:45:14.1793392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:45:14.1793895Z layer_outputs = layer_module( 2025-08-14T21:45:14.1794378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:14.1794835Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:14.1795382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-08-14T21:45:14.1795886Z sa_output = self.attention( 2025-08-14T21:45:14.1796389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-08-14T21:45:14.1796971Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:45:14.1797206Z 2025-08-14T21:45:14.1797312Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1797556Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1797841Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:14.1798285Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:14.1798682Z return mod(**inputs) 2025-08-14T21:45:14.1799168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward 2025-08-14T21:45:14.1799694Z distilbert_output = self.distilbert( 2025-08-14T21:45:14.1800216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:45:14.1800723Z return self.transformer( 2025-08-14T21:45:14.1801273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:45:14.1801805Z layer_outputs = layer_module( 2025-08-14T21:45:14.1802233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:14.1802689Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:14.1803212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 494, in forward 2025-08-14T21:45:14.1808060Z ffn_output = self.ffn(sa_output) # (bs, seq_length, dim) 2025-08-14T21:45:14.1808615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 428, in forward 2025-08-14T21:45:14.1809346Z return apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, input) 2025-08-14T21:45:14.1810007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:45:14.1810511Z return forward_fn(*input_tensors) 2025-08-14T21:45:14.1811022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 432, in ff_chunk 2025-08-14T21:45:14.1811534Z x = self.activation(x) 2025-08-14T21:45:14.1811941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:45:14.1812360Z return self.act(input) 2025-08-14T21:45:14.1812506Z 2025-08-14T21:45:14.1812605Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1812861Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1813113Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1813353Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1813607Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1813860Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1814095Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1814345Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1814632Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:14.1815074Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:14.1815478Z return mod(**inputs) 2025-08-14T21:45:14.1815997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward 2025-08-14T21:45:14.1816523Z distilbert_output = self.distilbert( 2025-08-14T21:45:14.1817038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:45:14.1817575Z return self.transformer( 2025-08-14T21:45:14.1818155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:45:14.1818705Z layer_outputs = layer_module( 2025-08-14T21:45:14.1819134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:14.1819580Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:14.1820097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-08-14T21:45:14.1820603Z sa_output = self.attention( 2025-08-14T21:45:14.1821098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-08-14T21:45:14.1821692Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:45:14.1821928Z 2025-08-14T21:45:14.1822030Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1822279Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1822566Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:45:14.1823009Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:14.1823406Z return mod(**inputs) 2025-08-14T21:45:14.1823892Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward 2025-08-14T21:45:14.1824414Z distilbert_output = self.distilbert( 2025-08-14T21:45:14.1824936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:45:14.1825437Z return self.transformer( 2025-08-14T21:45:14.1825927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:45:14.1826441Z layer_outputs = layer_module( 2025-08-14T21:45:14.1826914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:14.1827361Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:14.1827885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 494, in forward 2025-08-14T21:45:14.1828444Z ffn_output = self.ffn(sa_output) # (bs, seq_length, dim) 2025-08-14T21:45:14.1828999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 428, in forward 2025-08-14T21:45:14.1829674Z return apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, input) 2025-08-14T21:45:14.1830326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 
2025-08-14T21:45:14.1830834Z return forward_fn(*input_tensors) 2025-08-14T21:45:14.1831349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 432, in ff_chunk 2025-08-14T21:45:14.1831862Z x = self.activation(x) 2025-08-14T21:45:14.1840650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:45:14.1841302Z return self.act(input) 2025-08-14T21:45:14.1841472Z 2025-08-14T21:45:14.1841581Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1841883Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1842170Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1842491Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1842781Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1843061Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1843374Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1843654Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1843943Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:45:14.1844390Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:14.1844808Z return mod(**inputs) 2025-08-14T21:45:14.1845299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward 2025-08-14T21:45:14.1845823Z distilbert_output = self.distilbert( 2025-08-14T21:45:14.1846340Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:45:14.1849128Z return self.transformer( 2025-08-14T21:45:14.1849783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:45:14.1850307Z layer_outputs = layer_module( 2025-08-14T21:45:14.1850736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:14.1851190Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:14.1851712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-08-14T21:45:14.1852222Z sa_output = self.attention( 2025-08-14T21:45:14.1852721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-08-14T21:45:14.1853306Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:45:14.1853545Z 2025-08-14T21:45:14.1853652Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1853903Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1854184Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:14.1854630Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:14.1855025Z return mod(**inputs) 2025-08-14T21:45:14.1855611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward 2025-08-14T21:45:14.1856144Z distilbert_output = self.distilbert( 2025-08-14T21:45:14.1856671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-08-14T21:45:14.1857179Z return self.transformer( 2025-08-14T21:45:14.1857675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-08-14T21:45:14.1858190Z layer_outputs = layer_module( 2025-08-14T21:45:14.1858614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:14.1859071Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:14.1859593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 494, in forward 2025-08-14T21:45:14.1860154Z ffn_output = self.ffn(sa_output) # (bs, seq_length, dim) 2025-08-14T21:45:14.1860709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 428, in forward 2025-08-14T21:45:14.1861464Z return apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, input) 2025-08-14T21:45:14.1862183Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:45:14.1862720Z return forward_fn(*input_tensors) 2025-08-14T21:45:14.1863229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 432, in ff_chunk 2025-08-14T21:45:14.1863777Z x = self.activation(x) 2025-08-14T21:45:14.1864182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:45:14.1864602Z return self.act(input) 2025-08-14T21:45:14.1864742Z 2025-08-14T21:45:14.1864840Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1865096Z cudagraph partition due to non gpu ops 2025-08-14T21:45:14.1865380Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:45:14.1865817Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:14.1866216Z return mod(**inputs) 2025-08-14T21:45:14.1866709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1061, in forward 2025-08-14T21:45:14.1867266Z start_loss = loss_fct(start_logits, start_positions) 2025-08-14T21:45:14.1867468Z 2025-08-14T21:45:14.1867598Z cudagraph partition due to non gpu ops. 
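The traces around this point come from DistilBertForQuestionAnswering, whose loss is a cross entropy over the predicted start position plus one over the end position, averaged. A sketch with assumed shapes (batch=2, seq_len=128); the clamping of out-of-range positions done by the real model is omitted:

import torch
from torch import nn

start_logits = torch.randn(2, 128)
end_logits = torch.randn(2, 128)
start_positions = torch.randint(0, 128, (2,))
end_positions = torch.randint(0, 128, (2,))

loss_fct = nn.CrossEntropyLoss()
start_loss = loss_fct(start_logits, start_positions)  # as in modeling_distilbert.py line 1061
end_loss = loss_fct(end_logits, end_positions)        # as in line 1062
total_loss = (start_loss + end_loss) / 2
print(total_loss.item())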
Found from :
2025-08-14T21:45:14.1868040Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:45:14.1868438Z return mod(**inputs)
2025-08-14T21:45:14.1868917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1062, in forward
2025-08-14T21:45:14.1869458Z end_loss = loss_fct(end_logits, end_positions)
2025-08-14T21:45:14.1869652Z
2025-08-14T21:45:18.7483915Z Compilation time (from dynamo_timed): 12.368971146
2025-08-14T21:45:18.7487610Z pass
2025-08-14T21:45:18.7488240Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:18.7490032Z TIMING: _recursive_pre_grad_passes:0.02825 _recursive_joint_graph_passes:0.35595 _recursive_post_grad_passes:0.07715 async_compile.wait:0.85278 code_gen:4.189 inductor_compile:7.15976 backend_compile:10.47313 gc:0.00015 entire_frame_compile:12.36897 total_wall_time:12.36897
2025-08-14T21:45:18.7492101Z STATS: call_* op count: 161 | FakeTensorMode.__torch_dispatch__:12745 | FakeTensor.__torch_dispatch__:2105 | ProxyTorchDispatchMode.__torch_dispatch__:2842
2025-08-14T21:45:18.7497601Z Dynamo produced 1 graphs covering 161 ops with 0 graph breaks (0 unique)
2025-08-14T21:45:24.9143350Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:45:24.9144459Z from pkg_resources import resource_filename
2025-08-14T21:45:25.7146058Z
2025-08-14T21:45:28.6090439Z loading model: 0it [00:00, ?it/s]`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
2025-08-14T21:45:28.6091760Z WARNING:transformers.modeling_utils:`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
2025-08-14T21:45:28.6511946Z
2025-08-14T21:45:28.6512289Z loading model: 0it [00:02, ?it/s]
2025-08-14T21:45:28.6528047Z cpu eval DistillGPT2
2025-08-14T21:45:29.2810832Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:29.5732655Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:29.8833858Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:40.5873276Z cudagraph partition due to non gpu ops
2025-08-14T21:45:40.5873656Z cudagraph partition due to non gpu ops
2025-08-14T21:45:40.5873940Z cudagraph partition due to non gpu ops
2025-08-14T21:45:40.5874413Z cudagraph partition due to non gpu ops
2025-08-14T21:45:40.5874661Z cudagraph partition due to non gpu ops
2025-08-14T21:45:40.5874927Z cudagraph partition due to non gpu ops
2025-08-14T21:45:40.5875249Z cudagraph partition due to non gpu ops
2025-08-14T21:45:40.5875496Z cudagraph partition due to non gpu ops
2025-08-14T21:45:40.5875735Z cudagraph partition due to non gpu ops
2025-08-14T21:45:40.5875999Z cudagraph partition due to non gpu ops
2025-08-14T21:45:40.5876253Z cudagraph partition due to non gpu ops
2025-08-14T21:45:40.5876555Z cudagraph partition due to non gpu ops.
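The `loss_type=None` warning above means transformers fell back to its default ForCausalLMLoss for DistillGPT2. A sketch of the usual shifted-label causal-LM cross entropy that this corresponds to; the sizes (batch=2, seq_len=16, vocab=50257) are assumptions for illustration:

import torch
from torch import nn

logits = torch.randn(2, 16, 50257)
labels = torch.randint(0, 50257, (2, 16))

# Token t is predicted from tokens < t, so score logits[:, :-1] against labels[:, 1:].
shift_logits = logits[:, :-1, :].contiguous()
shift_labels = labels[:, 1:].contiguous()
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
print(loss.item())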
Found from : 2025-08-14T21:45:40.5877148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.5877791Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.5878309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.5878787Z outputs = block( 2025-08-14T21:45:40.5879191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.5879642Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.5880126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5880597Z return func(*args, **kwargs) 2025-08-14T21:45:40.5881060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:45:40.5881633Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:45:40.5882176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5882640Z return func(*args, **kwargs) 2025-08-14T21:45:40.5883098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:45:40.5883604Z attn_output, attn_weights = attention_interface( 2025-08-14T21:45:40.5884163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:45:40.5884774Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:45:40.5885012Z 2025-08-14T21:45:40.5885242Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:40.5885774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.5886279Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.5886816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.5887285Z outputs = block( 2025-08-14T21:45:40.5887684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.5888148Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.5888624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5889095Z return func(*args, **kwargs) 2025-08-14T21:45:40.5889555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:45:40.5890050Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:45:40.5890537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5891003Z return func(*args, **kwargs) 2025-08-14T21:45:40.5891461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:45:40.5896230Z attn_output, attn_weights = attention_interface( 2025-08-14T21:45:40.5896830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:45:40.5897400Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:45:40.5897676Z 2025-08-14T21:45:40.5897780Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5898044Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5898333Z cudagraph partition due to non gpu ops. 
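This trace ends one step past the attention call itself, at sdpa_attention_forward's attn_output.transpose(1, 2).contiguous(). A sketch of why that line exists; the shapes are assumed (batch=2, heads=12, seq_len=128, head_dim=64):

import torch

attn_output = torch.randn(2, 12, 128, 64)   # SDPA output layout: (batch, heads, seq_len, head_dim)

attn_output = attn_output.transpose(1, 2)   # (batch, seq_len, heads, head_dim), a non-contiguous view
attn_output = attn_output.contiguous()      # materialize the layout so the flatten below is valid
attn_output = attn_output.view(2, 128, 12 * 64)  # (batch, seq_len, hidden) for the output projection
print(attn_output.shape)  # torch.Size([2, 128, 768])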
Found from : 2025-08-14T21:45:40.5898845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.5899339Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.5899824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.5900280Z outputs = block( 2025-08-14T21:45:40.5900685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.5901149Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.5901623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5902088Z return func(*args, **kwargs) 2025-08-14T21:45:40.5902542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:45:40.5903060Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:45:40.5903561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:45:40.5904045Z hidden_states = self.act(hidden_states) 2025-08-14T21:45:40.5904482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:45:40.5905053Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:45:40.5905351Z 2025-08-14T21:45:40.5905450Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5905703Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5905952Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5906195Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5906504Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5906864Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5907150Z cudagraph partition due to non gpu ops. 
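The activation frame in the trace above is GPT-2's tanh approximation of GELU, written out explicitly in transformers/activations.py. A quick check, as a sketch, that the same formula is available in PyTorch as F.gelu(..., approximate="tanh"):

import math
import torch
import torch.nn.functional as F

x = torch.randn(1024)

# The expression from the trace (the "new GELU" variant in transformers).
gelu_new = 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))

print(torch.allclose(gelu_new, F.gelu(x, approximate="tanh"), atol=1e-6))  # True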
Found from : 2025-08-14T21:45:40.5907664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.5908160Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.5908650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.5909112Z outputs = block( 2025-08-14T21:45:40.5909514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.5909967Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.5910439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5910943Z return func(*args, **kwargs) 2025-08-14T21:45:40.5911404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:45:40.5911908Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:45:40.5912390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5912845Z return func(*args, **kwargs) 2025-08-14T21:45:40.5913301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:45:40.5913835Z attn_output, attn_weights = attention_interface( 2025-08-14T21:45:40.5914384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:45:40.5915040Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:45:40.5915306Z 2025-08-14T21:45:40.5915436Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:40.5915958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.5916445Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.5916927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.5917389Z outputs = block( 2025-08-14T21:45:40.5917788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.5918236Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.5918710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5919175Z return func(*args, **kwargs) 2025-08-14T21:45:40.5919629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:45:40.5920128Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:45:40.5920614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5929591Z return func(*args, **kwargs) 2025-08-14T21:45:40.5930182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:45:40.5930861Z attn_output, attn_weights = attention_interface( 2025-08-14T21:45:40.5931627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:45:40.5932314Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:45:40.5932533Z 2025-08-14T21:45:40.5932641Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5932907Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5933205Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:40.5933781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.5934279Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.5934761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.5935225Z outputs = block( 2025-08-14T21:45:40.5937758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.5938214Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.5938694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5939155Z return func(*args, **kwargs) 2025-08-14T21:45:40.5939617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:45:40.5940132Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:45:40.5940640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:45:40.5941115Z hidden_states = self.act(hidden_states) 2025-08-14T21:45:40.5941547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:45:40.5942117Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:45:40.5942442Z 2025-08-14T21:45:40.5942542Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5942803Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5943055Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5943332Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5943570Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5943817Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5944099Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:40.5944615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.5945106Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.5945590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.5946050Z outputs = block( 2025-08-14T21:45:40.5946442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.5946900Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.5947375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5947846Z return func(*args, **kwargs) 2025-08-14T21:45:40.5948306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:45:40.5949175Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:45:40.5949664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5950214Z return func(*args, **kwargs) 2025-08-14T21:45:40.5950702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:45:40.5951212Z attn_output, attn_weights = attention_interface( 2025-08-14T21:45:40.5951766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:45:40.5952370Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:45:40.5952613Z 2025-08-14T21:45:40.5952747Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:40.5953375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.5953869Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.5954415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.5954895Z outputs = block( 2025-08-14T21:45:40.5955302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.5955752Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.5956239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5956714Z return func(*args, **kwargs) 2025-08-14T21:45:40.5957167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:45:40.5957659Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:45:40.5958142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5958605Z return func(*args, **kwargs) 2025-08-14T21:45:40.5959054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:45:40.5959560Z attn_output, attn_weights = attention_interface( 2025-08-14T21:45:40.5960108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:45:40.5960722Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:45:40.5960923Z 2025-08-14T21:45:40.5961022Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5961382Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5975953Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:40.5976879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.5977409Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.5977916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.5978387Z outputs = block( 2025-08-14T21:45:40.5978924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.5979454Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.5979938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5980418Z return func(*args, **kwargs) 2025-08-14T21:45:40.5980889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:45:40.5981414Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:45:40.5981923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:45:40.5982411Z hidden_states = self.act(hidden_states) 2025-08-14T21:45:40.5982855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:45:40.5983456Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:45:40.5983785Z 2025-08-14T21:45:40.5983889Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5984157Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5984412Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5984658Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5984909Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5985159Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.5985435Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:40.5986070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.5986579Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.5987074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.5987538Z outputs = block( 2025-08-14T21:45:40.5987942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.5988402Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.5988885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5989351Z return func(*args, **kwargs) 2025-08-14T21:45:40.5989821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:45:40.5990326Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:45:40.5990805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.5991279Z return func(*args, **kwargs) 2025-08-14T21:45:40.5991729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:45:40.5992233Z attn_output, attn_weights = attention_interface( 2025-08-14T21:45:40.5992785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:45:40.5997644Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:45:40.5997918Z 2025-08-14T21:45:40.5998063Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:40.5998591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.5999082Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.5999573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.6000043Z outputs = block( 2025-08-14T21:45:40.6000439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.6000893Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.6001463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.6001934Z return func(*args, **kwargs) 2025-08-14T21:45:40.6002389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:45:40.6002886Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:45:40.6003373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.6003830Z return func(*args, **kwargs) 2025-08-14T21:45:40.6004291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:45:40.6004797Z attn_output, attn_weights = attention_interface( 2025-08-14T21:45:40.6005359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:45:40.6005929Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:45:40.6006143Z 2025-08-14T21:45:40.6006245Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6006545Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6006836Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:40.6007353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.6007967Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.6008513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.6008981Z outputs = block( 2025-08-14T21:45:40.6009382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.6009827Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.6010305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.6010773Z return func(*args, **kwargs) 2025-08-14T21:45:40.6011232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:45:40.6011742Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:45:40.6012257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:45:40.6012797Z hidden_states = self.act(hidden_states) 2025-08-14T21:45:40.6013232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:45:40.6013810Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:45:40.6014120Z 2025-08-14T21:45:40.6014220Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6014508Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6014749Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6015003Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6015250Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6015516Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6015798Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:40.6016323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.6016818Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.6017293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.6017762Z outputs = block( 2025-08-14T21:45:40.6018161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.6018605Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.6019082Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.6019546Z return func(*args, **kwargs) 2025-08-14T21:45:40.6020005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:45:40.6020489Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:45:40.6020973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.6021437Z return func(*args, **kwargs) 2025-08-14T21:45:40.6021889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:45:40.6026626Z attn_output, attn_weights = attention_interface( 2025-08-14T21:45:40.6027196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:45:40.6027804Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:45:40.6028040Z 2025-08-14T21:45:40.6028173Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:40.6028705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.6029259Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.6029745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.6030201Z outputs = block( 2025-08-14T21:45:40.6030613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.6031064Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.6031533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.6032008Z return func(*args, **kwargs) 2025-08-14T21:45:40.6032468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:45:40.6032967Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:45:40.6033443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.6033918Z return func(*args, **kwargs) 2025-08-14T21:45:40.6034372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:45:40.6034863Z attn_output, attn_weights = attention_interface( 2025-08-14T21:45:40.6035415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:45:40.6035987Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:45:40.6036213Z 2025-08-14T21:45:40.6036320Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6036568Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6036926Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:40.6037519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.6038008Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.6038490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.6038951Z outputs = block( 2025-08-14T21:45:40.6039354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.6039792Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.6040263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.6040732Z return func(*args, **kwargs) 2025-08-14T21:45:40.6041177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:45:40.6041812Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:45:40.6042317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:45:40.6042799Z hidden_states = self.act(hidden_states) 2025-08-14T21:45:40.6043228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:45:40.6043792Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:45:40.6044093Z 2025-08-14T21:45:40.6044192Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6044445Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6044691Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6044937Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6045181Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6045428Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6045761Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:40.6046337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.6046823Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.6047308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.6047770Z outputs = block( 2025-08-14T21:45:40.6048165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.6048607Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.6049434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.6049905Z return func(*args, **kwargs) 2025-08-14T21:45:40.6050353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:45:40.6050848Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:45:40.6055489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.6055954Z return func(*args, **kwargs) 2025-08-14T21:45:40.6056402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:45:40.6056905Z attn_output, attn_weights = attention_interface( 2025-08-14T21:45:40.6057455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:45:40.6058144Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:45:40.6058378Z 2025-08-14T21:45:40.6058512Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:40.6059075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.6059573Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.6060069Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.6060529Z outputs = block( 2025-08-14T21:45:40.6060941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.6061390Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.6061856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.6062324Z return func(*args, **kwargs) 2025-08-14T21:45:40.6062778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:45:40.6063272Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:45:40.6063743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.6064209Z return func(*args, **kwargs) 2025-08-14T21:45:40.6064659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:45:40.6065147Z attn_output, attn_weights = attention_interface( 2025-08-14T21:45:40.6065694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:45:40.6066385Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:45:40.6066588Z 2025-08-14T21:45:40.6066696Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6066943Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6067221Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:40.6067731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:40.6068221Z transformer_outputs = self.transformer( 2025-08-14T21:45:40.6068768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:40.6069228Z outputs = block( 2025-08-14T21:45:40.6069614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:40.6070057Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:40.6070522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:40.6070977Z return func(*args, **kwargs) 2025-08-14T21:45:40.6071428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:45:40.6071934Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:45:40.6072441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:45:40.6072914Z hidden_states = self.act(hidden_states) 2025-08-14T21:45:40.6073353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:45:40.6073909Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:45:40.6074198Z 2025-08-14T21:45:40.6074298Z cudagraph partition due to non gpu ops 2025-08-14T21:45:40.6074576Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:45:40.6075090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1207, in forward 2025-08-14T21:45:40.6075636Z logits = self.lm_head(hidden_states[:, slice_indices, :]) 2025-08-14T21:45:40.6075844Z 2025-08-14T21:45:45.9958568Z cudagraph partition due to non gpu ops. 
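The second record in the entry above ends at the language-model head, `logits = self.lm_head(hidden_states[:, slice_indices, :])`. The idea is to project only the positions whose logits are actually needed before the (large) vocabulary matmul. The sketch below assumes `slice_indices` selects just the final position; in the traced code it is computed by transformers and may differ.

```python
import torch
import torch.nn as nn

hidden_size, vocab_size = 768, 50257            # GPT-2-sized dims, assumed for illustration
lm_head = nn.Linear(hidden_size, vocab_size, bias=False)

hidden_states = torch.randn(2, 16, hidden_size)  # (batch, seq, hidden)
slice_indices = slice(-1, None)                  # assumption: keep only the last position
logits = lm_head(hidden_states[:, slice_indices, :])
print(logits.shape)  # torch.Size([2, 1, 50257])
```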
Found from : 2025-08-14T21:45:45.9959352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss 2025-08-14T21:45:45.9960008Z loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs) 2025-08-14T21:45:45.9960619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy 2025-08-14T21:45:45.9961337Z loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction) 2025-08-14T21:45:45.9961667Z 2025-08-14T21:45:47.4533923Z Compilation time (from dynamo_timed): 15.831526818 2025-08-14T21:45:47.4677615Z pass 2025-08-14T21:45:47.4679116Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:45:47.4689183Z TIMING: gc:0.0035 entire_frame_compile:15.83153 _recursive_pre_grad_passes:0.0384 _recursive_joint_graph_passes:0.31202 _recursive_post_grad_passes:0.07683 async_compile.wait:1.72338 code_gen:5.9762 inductor_compile:9.29277 backend_compile:11.9021 total_wall_time:15.83153 2025-08-14T21:45:47.4690670Z STATS: call_* op count: 299 | FakeTensorMode.__torch_dispatch__:12355 | FakeTensor.__torch_dispatch__:2126 | ProxyTorchDispatchMode.__torch_dispatch__:2254 2025-08-14T21:45:47.4691291Z Dynamo produced 2 graphs covering 299 ops with 2 graph breaks (1 unique) 2025-08-14T21:45:53.6942008Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-14T21:45:53.6949446Z from pkg_resources import resource_filename 2025-08-14T21:45:54.4275178Z 2025-08-14T21:45:54.4286395Z loading model: 0it [00:00, ?it/s]If you want to use `ElectraForCausalLM` as a standalone, add `is_decoder=True.` 2025-08-14T21:45:54.4287310Z WARNING:transformers.models.electra.modeling_electra:If you want to use `ElectraForCausalLM` as a standalone, add `is_decoder=True.` 2025-08-14T21:45:54.9708141Z 2025-08-14T21:45:54.9708845Z loading model: 0it [00:00, ?it/s] 2025-08-14T21:45:54.9726639Z cpu eval ElectraForCausalLM 2025-08-14T21:45:55.3180801Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:45:55.5136485Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:45:55.7038793Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:46:10.8882300Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8882721Z cudagraph partition due to non gpu ops. 
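The last trace before the compile summary goes through transformers' `ForCausalLMLoss`, which delegates to `fixed_cross_entropy`. For a causal LM this amounts to a shift-by-one cross entropy over the vocabulary; the sketch below is a simplified version of what those frames compute (the `num_items_in_batch` and keyword handling in transformers are omitted, and the sizes are assumptions).

```python
import torch
import torch.nn.functional as F


def causal_lm_loss(logits: torch.Tensor, labels: torch.Tensor,
                   ignore_index: int = -100) -> torch.Tensor:
    """Sketch of a shift-by-one causal LM loss (simplified from the traced frames)."""
    # Predict token t+1 from position t: drop the last logit and the first label.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=ignore_index,
    )


logits = torch.randn(2, 16, 50257)
labels = torch.randint(0, 50257, (2, 16))
print(causal_lm_loss(logits, labels))
```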
Found from : 2025-08-14T21:46:10.8883308Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:10.8887998Z return mod(**inputs) 2025-08-14T21:46:10.8888560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:46:10.8889077Z outputs = self.electra( 2025-08-14T21:46:10.8889562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 797, in forward 2025-08-14T21:46:10.8890101Z hidden_states = self.embeddings_project(hidden_states) 2025-08-14T21:46:10.8890312Z 2025-08-14T21:46:10.8890417Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8890672Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8890941Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8891419Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8891670Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8891906Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8892149Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8892462Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8892700Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8892947Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8893240Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:46:10.8893685Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:10.8894102Z return mod(**inputs) 2025-08-14T21:46:10.8894589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:46:10.8895086Z outputs = self.electra( 2025-08-14T21:46:10.8895550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:46:10.8896045Z hidden_states = self.encoder( 2025-08-14T21:46:10.8896535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:46:10.8897023Z layer_outputs = layer_module( 2025-08-14T21:46:10.8897462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:10.8897922Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:10.8898510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:46:10.8899078Z layer_output = apply_chunking_to_forward( 2025-08-14T21:46:10.8899588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:46:10.8900080Z return forward_fn(*input_tensors) 2025-08-14T21:46:10.8900607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:46:10.8901207Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:46:10.8901763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:46:10.8902399Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:46:10.8902875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 
2025-08-14T21:46:10.8903302Z return self.act(input) 2025-08-14T21:46:10.8903451Z 2025-08-14T21:46:10.8903557Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8903817Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8904058Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8904307Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8904571Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8904815Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8905060Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8905308Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8905551Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8905792Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8906035Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8906311Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:46:10.8906766Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:10.8907172Z return mod(**inputs) 2025-08-14T21:46:10.8907647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:46:10.8908138Z outputs = self.electra( 2025-08-14T21:46:10.8908605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:46:10.8909124Z hidden_states = self.encoder( 2025-08-14T21:46:10.8909605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:46:10.8910123Z layer_outputs = layer_module( 2025-08-14T21:46:10.8910553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:10.8911010Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:10.8911503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:46:10.8912008Z layer_output = apply_chunking_to_forward( 2025-08-14T21:46:10.8912504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:46:10.8921580Z return forward_fn(*input_tensors) 2025-08-14T21:46:10.8922165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:46:10.8922760Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:46:10.8923319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:46:10.8923866Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:46:10.8924338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:46:10.8924774Z return self.act(input) 2025-08-14T21:46:10.8924912Z 2025-08-14T21:46:10.8925018Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8925263Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8925518Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8925761Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8926005Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8926249Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8926493Z cudagraph 
partition due to non gpu ops 2025-08-14T21:46:10.8926740Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8927046Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8927363Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8927611Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8928009Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:46:10.8928466Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:10.8928883Z return mod(**inputs) 2025-08-14T21:46:10.8929343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:46:10.8929843Z outputs = self.electra( 2025-08-14T21:46:10.8930305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:46:10.8930794Z hidden_states = self.encoder( 2025-08-14T21:46:10.8931275Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:46:10.8931776Z layer_outputs = layer_module( 2025-08-14T21:46:10.8932209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:10.8932662Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:10.8933152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:46:10.8933663Z layer_output = apply_chunking_to_forward( 2025-08-14T21:46:10.8934159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:46:10.8934637Z return forward_fn(*input_tensors) 2025-08-14T21:46:10.8935217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:46:10.8935806Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:46:10.8936383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:46:10.8936914Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:46:10.8937390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:46:10.8937810Z return self.act(input) 2025-08-14T21:46:10.8937947Z 2025-08-14T21:46:10.8938052Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8938294Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8938541Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8938782Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8939023Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8939262Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8939509Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8939747Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8939990Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8940239Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8940480Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8940766Z cudagraph partition due to non gpu ops. 
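The ElectraForCausalLM traces above and below all end in the same place: `apply_chunking_to_forward` dispatching the encoder layer's feed-forward chunk, with the partition again landing on the intermediate activation. The helper below is a compact reimplementation of the chunking idea for illustration only, not transformers' own `apply_chunking_to_forward`; the chunk size, dimensions, and toy feed-forward block are assumptions.

```python
import torch
import torch.nn as nn


def chunked_feed_forward(forward_fn, hidden_states: torch.Tensor,
                         chunk_size: int, chunk_dim: int = 1) -> torch.Tensor:
    """Apply forward_fn in slices along chunk_dim to bound peak memory.

    Mirrors the idea behind transformers' apply_chunking_to_forward: a chunk
    size of 0 means "no chunking" (call forward_fn once on the full tensor).
    """
    if chunk_size == 0:
        return forward_fn(hidden_states)
    chunks = hidden_states.split(chunk_size, dim=chunk_dim)
    return torch.cat([forward_fn(c) for c in chunks], dim=chunk_dim)


# Toy feed-forward block shaped like an encoder layer's intermediate + output projection.
ffn = nn.Sequential(nn.Linear(256, 1024), nn.GELU(), nn.Linear(1024, 256))
x = torch.randn(2, 128, 256)                       # (batch, seq, hidden)
out = chunked_feed_forward(ffn, x, chunk_size=32)  # chunk over the sequence dimension
print(out.shape)                                   # torch.Size([2, 128, 256])
```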
Found from : 2025-08-14T21:46:10.8941216Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:10.8941617Z return mod(**inputs) 2025-08-14T21:46:10.8942165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:46:10.8942713Z outputs = self.electra( 2025-08-14T21:46:10.8943204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:46:10.8943702Z hidden_states = self.encoder( 2025-08-14T21:46:10.8944201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:46:10.8944689Z layer_outputs = layer_module( 2025-08-14T21:46:10.8945176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:10.8945626Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:10.8946111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:46:10.8946660Z layer_output = apply_chunking_to_forward( 2025-08-14T21:46:10.8947156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:46:10.8947651Z return forward_fn(*input_tensors) 2025-08-14T21:46:10.8948167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:46:10.8949054Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:46:10.8949608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:46:10.8950150Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:46:10.8950669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:46:10.8951223Z return self.act(input) 2025-08-14T21:46:10.8951362Z 2025-08-14T21:46:10.8951465Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8951708Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8951955Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8952196Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8952515Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8952751Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8952999Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8953310Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8953549Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8953789Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8954036Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8954311Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:10.8954757Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:10.8955162Z return mod(**inputs) 2025-08-14T21:46:10.8955622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:46:10.8956118Z outputs = self.electra( 2025-08-14T21:46:10.8962821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:46:10.8963313Z hidden_states = self.encoder( 2025-08-14T21:46:10.8963785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:46:10.8964282Z layer_outputs = layer_module( 2025-08-14T21:46:10.8964723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:10.8965175Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:10.8965668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:46:10.8966175Z layer_output = apply_chunking_to_forward( 2025-08-14T21:46:10.8966670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:46:10.8967153Z return forward_fn(*input_tensors) 2025-08-14T21:46:10.8967683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:46:10.8968272Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:46:10.8968822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:46:10.8969425Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:46:10.8969905Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:46:10.8970325Z return self.act(input) 2025-08-14T21:46:10.8970464Z 2025-08-14T21:46:10.8970565Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8970885Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8971146Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8971457Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8971696Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8971941Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8972192Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8972440Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8972686Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8972935Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8973179Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8973464Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:10.8973911Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:10.8974309Z return mod(**inputs) 2025-08-14T21:46:10.8974766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:46:10.8975254Z outputs = self.electra( 2025-08-14T21:46:10.8975715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:46:10.8976233Z hidden_states = self.encoder( 2025-08-14T21:46:10.8976709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:46:10.8977222Z layer_outputs = layer_module( 2025-08-14T21:46:10.8977652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:10.8978093Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:10.8978590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:46:10.8979099Z layer_output = apply_chunking_to_forward( 2025-08-14T21:46:10.8979599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:46:10.8980081Z return forward_fn(*input_tensors) 2025-08-14T21:46:10.8980609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:46:10.8981202Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:46:10.8981741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:46:10.8982281Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:46:10.8982756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:46:10.8983180Z return self.act(input) 2025-08-14T21:46:10.8983322Z 2025-08-14T21:46:10.8983418Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8983671Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8983921Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8984163Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8984412Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8984656Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8984905Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8985145Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8989641Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8989922Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8991304Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.8991611Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:10.8992072Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:10.8992538Z return mod(**inputs) 2025-08-14T21:46:10.8993009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:46:10.8993511Z outputs = self.electra( 2025-08-14T21:46:10.8993983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:46:10.8994463Z hidden_states = self.encoder( 2025-08-14T21:46:10.8994941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:46:10.8995439Z layer_outputs = layer_module( 2025-08-14T21:46:10.8995858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:10.8996339Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:10.8996905Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:46:10.8997414Z layer_output = apply_chunking_to_forward( 2025-08-14T21:46:10.8997900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:46:10.8998422Z return forward_fn(*input_tensors) 2025-08-14T21:46:10.8998944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:46:10.8999552Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:46:10.9000236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:46:10.9000796Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:46:10.9001340Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:46:10.9001760Z return self.act(input) 2025-08-14T21:46:10.9001896Z 2025-08-14T21:46:10.9001993Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9002240Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9002492Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9002735Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9002984Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9003233Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9003470Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9003727Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9003975Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9004215Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9004462Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9004739Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:10.9005184Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:10.9005587Z return mod(**inputs) 2025-08-14T21:46:10.9006055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:46:10.9006551Z outputs = self.electra( 2025-08-14T21:46:10.9007017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:46:10.9007499Z hidden_states = self.encoder( 2025-08-14T21:46:10.9007979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:46:10.9008469Z layer_outputs = layer_module( 2025-08-14T21:46:10.9008947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:10.9009401Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:10.9009899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:46:10.9010404Z layer_output = apply_chunking_to_forward( 2025-08-14T21:46:10.9010899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:46:10.9011388Z return forward_fn(*input_tensors) 2025-08-14T21:46:10.9011910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:46:10.9012495Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:46:10.9013051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:46:10.9013596Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:46:10.9014076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:46:10.9018739Z return self.act(input) 2025-08-14T21:46:10.9018938Z 2025-08-14T21:46:10.9019042Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9019303Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9019554Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9019823Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9020071Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9020321Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9020558Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9020823Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9021068Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9021302Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9021543Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9021819Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:10.9022257Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:10.9022658Z return mod(**inputs) 2025-08-14T21:46:10.9023120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:46:10.9023605Z outputs = self.electra( 2025-08-14T21:46:10.9024065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:46:10.9024549Z hidden_states = self.encoder( 2025-08-14T21:46:10.9025032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:46:10.9025510Z layer_outputs = layer_module( 2025-08-14T21:46:10.9025942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:10.9026389Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:10.9026886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:46:10.9027386Z layer_output = apply_chunking_to_forward( 2025-08-14T21:46:10.9027882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:46:10.9028370Z return forward_fn(*input_tensors) 2025-08-14T21:46:10.9028970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:46:10.9029610Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:46:10.9030210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:46:10.9030747Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:46:10.9031217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:46:10.9031651Z return self.act(input) 2025-08-14T21:46:10.9031795Z 2025-08-14T21:46:10.9031892Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9032144Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9032392Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9032650Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9032893Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9033144Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9033397Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9033639Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9033876Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9034118Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9034365Z cudagraph partition due to non gpu ops 2025-08-14T21:46:10.9034646Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:10.9085420Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:10.9085853Z return mod(**inputs) 2025-08-14T21:46:10.9086312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1564, in forward 2025-08-14T21:46:10.9088996Z lm_loss = self.loss_function( 2025-08-14T21:46:10.9089467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss 2025-08-14T21:46:10.9090066Z loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs) 2025-08-14T21:46:10.9090667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy 2025-08-14T21:46:10.9091299Z loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction) 2025-08-14T21:46:10.9091624Z 2025-08-14T21:46:17.1140312Z Compilation time (from dynamo_timed): 19.887103342 2025-08-14T21:46:17.1213911Z pass 2025-08-14T21:46:17.1214346Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:46:17.1215468Z TIMING: _recursive_pre_grad_passes:0.05277 _recursive_joint_graph_passes:0.6455 _recursive_post_grad_passes:0.10448 async_compile.wait:0.99312 code_gen:5.71656 inductor_compile:9.38784 backend_compile:16.00805 gc:0.00037 entire_frame_compile:19.8871 total_wall_time:19.8871 2025-08-14T21:46:17.1216604Z STATS: call_* op count: 377 | FakeTensorMode.__torch_dispatch__:26896 | FakeTensor.__torch_dispatch__:3851 | ProxyTorchDispatchMode.__torch_dispatch__:6491 2025-08-14T21:46:17.1217226Z Dynamo produced 1 graphs covering 377 ops with 0 graph breaks (0 unique) 2025-08-14T21:46:23.2446362Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-14T21:46:23.2447478Z from pkg_resources import resource_filename 2025-08-14T21:46:23.9508883Z 2025-08-14T21:46:24.4410682Z loading model: 0it [00:00, ?it/s] 2025-08-14T21:46:24.4411038Z loading model: 0it [00:00, ?it/s] 2025-08-14T21:46:24.4439718Z cpu eval ElectraForQuestionAnswering 2025-08-14T21:46:24.7416162Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:46:24.8967143Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:46:25.0513882Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:46:40.2114132Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2114588Z cudagraph partition due to non gpu ops. 
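The ElectraForCausalLM run above ends in the causal-LM loss chain (ForCausalLMLoss, fixed_cross_entropy, nn.functional.cross_entropy) before the timing summary. A hedged sketch of the shifted cross-entropy that chain computes; ignore_index=-100 is the usual padding convention and is assumed here rather than read from the log:

import torch.nn.functional as F

def causal_lm_loss_sketch(logits, labels, ignore_index=-100):
    # Predict token t+1 from position t: drop the last logit and the first label.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    # Flatten to (batch * seq, vocab) vs (batch * seq,) for cross_entropy.
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=ignore_index,
    )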
Found from : 2025-08-14T21:46:40.2115086Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:40.2115512Z return mod(**inputs) 2025-08-14T21:46:40.2116034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1317, in forward 2025-08-14T21:46:40.2116581Z discriminator_hidden_states = self.electra( 2025-08-14T21:46:40.2117107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 797, in forward 2025-08-14T21:46:40.2121804Z hidden_states = self.embeddings_project(hidden_states) 2025-08-14T21:46:40.2122078Z 2025-08-14T21:46:40.2122196Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2122452Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2122710Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2122975Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2123461Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2123706Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2123950Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2124257Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2124492Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2124734Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2125024Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:46:40.2125472Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:40.2135391Z return mod(**inputs) 2025-08-14T21:46:40.2136052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1317, in forward 2025-08-14T21:46:40.2136599Z discriminator_hidden_states = self.electra( 2025-08-14T21:46:40.2137127Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:46:40.2137652Z hidden_states = self.encoder( 2025-08-14T21:46:40.2138156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:46:40.2138659Z layer_outputs = layer_module( 2025-08-14T21:46:40.2139099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:40.2139574Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:40.2140089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:46:40.2140604Z layer_output = apply_chunking_to_forward( 2025-08-14T21:46:40.2141116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:46:40.2141617Z return forward_fn(*input_tensors) 2025-08-14T21:46:40.2142172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:46:40.2142772Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:46:40.2143346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:46:40.2144083Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:46:40.2144569Z File 
"/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:46:40.2145018Z return self.act(input) 2025-08-14T21:46:40.2145173Z 2025-08-14T21:46:40.2145284Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2145554Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2145802Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2146057Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2155504Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2155797Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2156093Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2156383Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2156666Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2156914Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2157243Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2157533Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:46:40.2157986Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:40.2158403Z return mod(**inputs) 2025-08-14T21:46:40.2158878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1317, in forward 2025-08-14T21:46:40.2159401Z discriminator_hidden_states = self.electra( 2025-08-14T21:46:40.2159919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:46:40.2160491Z hidden_states = self.encoder( 2025-08-14T21:46:40.2162767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:46:40.2163308Z layer_outputs = layer_module( 2025-08-14T21:46:40.2163744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:40.2164204Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:40.2164700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:46:40.2165217Z layer_output = apply_chunking_to_forward( 2025-08-14T21:46:40.2165721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:46:40.2166217Z return forward_fn(*input_tensors) 2025-08-14T21:46:40.2166747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:46:40.2167350Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:46:40.2167912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:46:40.2168464Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:46:40.2168941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:46:40.2169375Z return self.act(input) 2025-08-14T21:46:40.2169516Z 2025-08-14T21:46:40.2169624Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2169873Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2170127Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2170385Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2170645Z cudagraph partition due to non 
gpu ops 2025-08-14T21:46:40.2170889Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2171142Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2171394Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2171635Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2171879Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2172132Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2172485Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:46:40.2172947Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:40.2173360Z return mod(**inputs) 2025-08-14T21:46:40.2173836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1317, in forward 2025-08-14T21:46:40.2174352Z discriminator_hidden_states = self.electra( 2025-08-14T21:46:40.2174865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:46:40.2175453Z hidden_states = self.encoder( 2025-08-14T21:46:40.2175993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:46:40.2176500Z layer_outputs = layer_module( 2025-08-14T21:46:40.2176942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:40.2177406Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:40.2177902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:46:40.2178412Z layer_output = apply_chunking_to_forward( 2025-08-14T21:46:40.2178917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:46:40.2179430Z return forward_fn(*input_tensors) 2025-08-14T21:46:40.2179965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:46:40.2180588Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:46:40.2181138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:46:40.2181675Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:46:40.2182153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:46:40.2182584Z return self.act(input) 2025-08-14T21:46:40.2182727Z 2025-08-14T21:46:40.2182833Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2183082Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2183336Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2183583Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2183825Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2184079Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2184324Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2184568Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2184821Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2185069Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2185316Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2185606Z cudagraph partition due to non gpu ops. 
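The first ElectraForQuestionAnswering trace above enters through self.embeddings_project, the linear layer Electra inserts when its embedding size differs from its hidden size; the later frames are the same encoder feed-forward path already seen for the causal-LM model. A small sketch of that projection, with illustrative sizes that are assumptions rather than values read from this run:

import torch
import torch.nn as nn

embedding_size, hidden_size = 128, 256   # assumed, typical for small Electra configs
embeddings_project = nn.Linear(embedding_size, hidden_size)

token_embeddings = torch.randn(1, 128, embedding_size)   # (batch, seq, embedding_size)
hidden_states = embeddings_project(token_embeddings)     # (batch, seq, hidden_size)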
Found from : 2025-08-14T21:46:40.2313395Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:40.2313794Z return mod(**inputs) 2025-08-14T21:46:40.2314268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1317, in forward 2025-08-14T21:46:40.2314845Z discriminator_hidden_states = self.electra( 2025-08-14T21:46:40.2315366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:46:40.2315851Z hidden_states = self.encoder( 2025-08-14T21:46:40.2316379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:46:40.2316884Z layer_outputs = layer_module( 2025-08-14T21:46:40.2317306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:40.2317760Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:40.2318259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:46:40.2318779Z layer_output = apply_chunking_to_forward( 2025-08-14T21:46:40.2319271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:46:40.2319779Z return forward_fn(*input_tensors) 2025-08-14T21:46:40.2326684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:46:40.2327293Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:46:40.2327842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:46:40.2328394Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:46:40.2328880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:46:40.2329302Z return self.act(input) 2025-08-14T21:46:40.2329488Z 2025-08-14T21:46:40.2329586Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2329839Z cudagraph partition due to non gpu ops 2025-08-14T21:46:40.2330125Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:46:40.2330588Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:40.2330996Z return mod(**inputs) 2025-08-14T21:46:40.2331464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1348, in forward 2025-08-14T21:46:40.2331988Z start_loss = loss_fct(start_logits, start_positions) 2025-08-14T21:46:40.2332193Z 2025-08-14T21:46:40.2332321Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:40.2332768Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:40.2333177Z return mod(**inputs) 2025-08-14T21:46:40.2333636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1349, in forward 2025-08-14T21:46:40.2334160Z end_loss = loss_fct(end_logits, end_positions) 2025-08-14T21:46:40.2334349Z 2025-08-14T21:46:45.1252977Z Compilation time (from dynamo_timed): 18.66441759 2025-08-14T21:46:45.1253346Z pass 2025-08-14T21:46:45.1253692Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:46:45.1254711Z TIMING: _recursive_pre_grad_passes:0.05214 _recursive_joint_graph_passes:0.63541 _recursive_post_grad_passes:0.1141 async_compile.wait:0.00446 code_gen:4.19735 inductor_compile:8.19353 backend_compile:14.78156 gc:0.0003 entire_frame_compile:18.66442 total_wall_time:18.66442 2025-08-14T21:46:45.1255911Z STATS: call_* op count: 378 | FakeTensorMode.__torch_dispatch__:26743 | FakeTensor.__torch_dispatch__:3868 | ProxyTorchDispatchMode.__torch_dispatch__:6518 2025-08-14T21:46:45.1256531Z Dynamo produced 1 graphs covering 378 ops with 0 graph breaks (0 unique) 2025-08-14T21:46:51.3533888Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-14T21:46:51.3535005Z from pkg_resources import resource_filename 2025-08-14T21:46:52.0717797Z 2025-08-14T21:46:54.5243972Z loading model: 0it [00:00, ?it/s] 2025-08-14T21:46:54.5244322Z loading model: 0it [00:02, ?it/s] 2025-08-14T21:46:54.5256898Z cpu eval GPT2ForSequenceClassification 2025-08-14T21:46:55.7293624Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:46:56.3442512Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:46:56.9703052Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:47:09.8971908Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8972289Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8972595Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8972927Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8973217Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8973463Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8973736Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8973992Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8974258Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8974502Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8974746Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8974996Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8975290Z cudagraph partition due to non gpu ops. 
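The ElectraForQuestionAnswering traces above end at the span-extraction loss (start_loss over the start logits, end_loss over the end logits) before the harness moves on to GPT2ForSequenceClassification. A hedged sketch of that loss, with the usual clamping of out-of-range positions omitted for brevity:

import torch.nn as nn

def qa_span_loss_sketch(start_logits, end_logits, start_positions, end_positions):
    # Two independent classification problems over sequence positions, averaged.
    loss_fct = nn.CrossEntropyLoss()
    start_loss = loss_fct(start_logits, start_positions)
    end_loss = loss_fct(end_logits, end_positions)
    return (start_loss + end_loss) / 2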
Found from : 2025-08-14T21:47:09.8975846Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.8976553Z return mod(**inputs) 2025-08-14T21:47:09.8977039Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1509, in forward 2025-08-14T21:47:09.8977667Z last_non_pad_token = (token_indices * non_pad_mask).argmax(-1) 2025-08-14T21:47:09.8977899Z 2025-08-14T21:47:09.8977997Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8978251Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8978493Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8978737Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8978978Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.8979256Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:47:09.8979708Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.8980119Z return mod(**inputs) 2025-08-14T21:47:09.8980631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.8981144Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.8981644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.8982123Z outputs = block( 2025-08-14T21:47:09.8982532Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.8982990Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.8983475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.8983946Z return func(*args, **kwargs) 2025-08-14T21:47:09.8984415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.8984970Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.8985474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.8985948Z return func(*args, **kwargs) 2025-08-14T21:47:09.8986421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.8986939Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.8987611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:09.8988220Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:09.8988467Z 2025-08-14T21:47:09.8988678Z cudagraph partition due to non gpu ops. 
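The GPT-2 attention partitions are reported at the sdpa_attention_forward frames, i.e. at the call into torch.nn.functional.scaled_dot_product_attention and at the transpose().contiguous() right after it. A minimal sketch of that call; the tensor shapes (batch, heads, seq, head_dim) and is_causal=True are assumptions for the demo, not values taken from this run:

import torch
import torch.nn.functional as F

q = torch.randn(1, 12, 128, 64)
k = torch.randn(1, 12, 128, 64)
v = torch.randn(1, 12, 128, 64)

attn_output = F.scaled_dot_product_attention(q, k, v, is_causal=True)
# Back to (batch, seq, heads, head_dim) before the heads are merged, as in the trace.
attn_output = attn_output.transpose(1, 2).contiguous()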
Found from : 2025-08-14T21:47:09.8989136Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.8989553Z return mod(**inputs) 2025-08-14T21:47:09.8990009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.8994796Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.8995305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.8995778Z outputs = block( 2025-08-14T21:47:09.8996203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.8996666Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.8997152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.8997613Z return func(*args, **kwargs) 2025-08-14T21:47:09.8998078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.8998625Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.8999116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.8999615Z return func(*args, **kwargs) 2025-08-14T21:47:09.9000074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9000593Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9001148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:09.9001799Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:09.9002015Z 2025-08-14T21:47:09.9002118Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9002380Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9002666Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9003121Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9003530Z return mod(**inputs) 2025-08-14T21:47:09.9003974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9004478Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9005041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9005565Z outputs = block( 2025-08-14T21:47:09.9005959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9006413Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9006893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9007363Z return func(*args, **kwargs) 2025-08-14T21:47:09.9007831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:09.9008346Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:09.9008862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:09.9009343Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:09.9009852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:09.9010429Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:09.9010731Z 2025-08-14T21:47:09.9010843Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9011094Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9011351Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9011607Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9011847Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9012098Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9012381Z cudagraph partition due to non gpu ops. 
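The activation in the GPT-2 MLP trace above is the tanh approximation of GELU, written out explicitly by transformers. A small sketch comparing that formula with PyTorch's built-in approximate="tanh" variant; printing the maximum difference is only a sanity check for the illustration:

import math
import torch
import torch.nn.functional as F

x = torch.randn(4, 8)
gelu_from_trace = 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))
print(torch.max(torch.abs(gelu_from_trace - F.gelu(x, approximate="tanh"))))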
Found from : 2025-08-14T21:47:09.9012833Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9013247Z return mod(**inputs) 2025-08-14T21:47:09.9013707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9014216Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9014706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9015174Z outputs = block( 2025-08-14T21:47:09.9015572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9016055Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9016522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9017017Z return func(*args, **kwargs) 2025-08-14T21:47:09.9017474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9017967Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9018451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9018918Z return func(*args, **kwargs) 2025-08-14T21:47:09.9023645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9024153Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9024720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:09.9025331Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:09.9025568Z 2025-08-14T21:47:09.9025714Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:47:09.9026210Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:09.9026623Z     return mod(**inputs)
2025-08-14T21:47:09.9027073Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-08-14T21:47:09.9027564Z     transformer_outputs = self.transformer(
2025-08-14T21:47:09.9028049Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-08-14T21:47:09.9028515Z     outputs = block(
2025-08-14T21:47:09.9028919Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:09.9029369Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:09.9029849Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:09.9030319Z     return func(*args, **kwargs)
2025-08-14T21:47:09.9030777Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-08-14T21:47:09.9031333Z     attn_output, self_attn_weights = self.attn(
2025-08-14T21:47:09.9031825Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:09.9032292Z     return func(*args, **kwargs)
2025-08-14T21:47:09.9032771Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward
2025-08-14T21:47:09.9033270Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:47:09.9033917Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:47:09.9034553Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:47:09.9034761Z
2025-08-14T21:47:09.9034868Z cudagraph partition due to non gpu ops
2025-08-14T21:47:09.9035122Z cudagraph partition due to non gpu ops
2025-08-14T21:47:09.9035414Z cudagraph partition due to non gpu ops.
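The third variant stops one line later, where the attention output is transposed and materialized with .contiguous() before the heads are merged back into the hidden dimension. A hedged sketch of that reshuffle, again with illustrative sizes:

    # Hedged sketch of the transpose + contiguous step flagged above: SDPA returns
    # (batch, heads, seq, head_dim); the model moves heads next to head_dim and
    # then merges them. Sizes are illustrative, not the benchmark's.
    import torch

    batch, num_heads, seq_len, head_dim = 2, 12, 8, 64
    attn_output = torch.randn(batch, num_heads, seq_len, head_dim)  # SDPA layout

    attn_output = attn_output.transpose(1, 2).contiguous()          # (batch, seq, heads, head_dim)
    attn_output = attn_output.reshape(batch, seq_len, num_heads * head_dim)  # merge heads -> hidden
    print(attn_output.shape)  # torch.Size([2, 8, 768])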
Found from : 2025-08-14T21:47:09.9035866Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9036264Z return mod(**inputs) 2025-08-14T21:47:09.9036713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9037204Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9037696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9038181Z outputs = block( 2025-08-14T21:47:09.9038587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9039066Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9039536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9040004Z return func(*args, **kwargs) 2025-08-14T21:47:09.9040468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:09.9040980Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:09.9041570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:09.9042059Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:09.9042494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:09.9043071Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:09.9043367Z 2025-08-14T21:47:09.9043472Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9043730Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9043981Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9044226Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9044472Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9044718Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9044994Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9045448Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9045863Z return mod(**inputs) 2025-08-14T21:47:09.9046315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9046810Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9047299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9047769Z outputs = block( 2025-08-14T21:47:09.9052639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9054409Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9054921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9055390Z return func(*args, **kwargs) 2025-08-14T21:47:09.9055848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9056348Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9056834Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9057299Z return func(*args, **kwargs) 2025-08-14T21:47:09.9057759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9058273Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9058840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:09.9059441Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:09.9059686Z 2025-08-14T21:47:09.9059817Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9060267Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9060675Z return mod(**inputs) 2025-08-14T21:47:09.9061119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9061678Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9062164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9062771Z outputs = block( 2025-08-14T21:47:09.9063252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9063712Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9064196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9064657Z return func(*args, **kwargs) 2025-08-14T21:47:09.9065120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9065625Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9066106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9066576Z return func(*args, **kwargs) 2025-08-14T21:47:09.9067036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9067540Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9068095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:09.9068673Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:09.9068883Z 2025-08-14T21:47:09.9068980Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9069234Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9069515Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9069958Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9070364Z return mod(**inputs) 2025-08-14T21:47:09.9070810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9071306Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9071854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9072322Z outputs = block( 2025-08-14T21:47:09.9072715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9073171Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9073653Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9074119Z return func(*args, **kwargs) 2025-08-14T21:47:09.9074579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:09.9075105Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:09.9075615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:09.9076102Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:09.9076553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:09.9081390Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:09.9081711Z 2025-08-14T21:47:09.9081827Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9082083Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9082339Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9082600Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9082870Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9083124Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9083416Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9083860Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9084303Z return mod(**inputs) 2025-08-14T21:47:09.9084768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9085269Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9085753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9086219Z outputs = block( 2025-08-14T21:47:09.9086622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9087075Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9087551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9088050Z return func(*args, **kwargs) 2025-08-14T21:47:09.9088540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9089038Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9089521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9089989Z return func(*args, **kwargs) 2025-08-14T21:47:09.9090448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9090957Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9091509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:09.9092254Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:09.9092488Z 2025-08-14T21:47:09.9092623Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9093064Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9093471Z return mod(**inputs) 2025-08-14T21:47:09.9093983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9094493Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9094976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9095440Z outputs = block( 2025-08-14T21:47:09.9095841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9096297Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9096778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9097246Z return func(*args, **kwargs) 2025-08-14T21:47:09.9097709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9098202Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9098695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9099159Z return func(*args, **kwargs) 2025-08-14T21:47:09.9099618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9100127Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9100690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:09.9101302Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:09.9101510Z 2025-08-14T21:47:09.9101609Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9101893Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9102187Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9102639Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9103042Z return mod(**inputs) 2025-08-14T21:47:09.9103497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9103997Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9104476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9104945Z outputs = block( 2025-08-14T21:47:09.9105349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9105804Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9127813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9128328Z return func(*args, **kwargs) 2025-08-14T21:47:09.9128832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:09.9129373Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:09.9129904Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:09.9130403Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:09.9130846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:09.9131430Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:09.9131734Z 2025-08-14T21:47:09.9131851Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9132115Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9132370Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9132624Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9132985Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9133236Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9133528Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9134002Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9134417Z return mod(**inputs) 2025-08-14T21:47:09.9134885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9135564Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9136133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9136604Z outputs = block( 2025-08-14T21:47:09.9137014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9137475Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9137953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9138428Z return func(*args, **kwargs) 2025-08-14T21:47:09.9138894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9139393Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9139917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9140424Z return func(*args, **kwargs) 2025-08-14T21:47:09.9140891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9141428Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9141997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:09.9142611Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:09.9142848Z 2025-08-14T21:47:09.9142994Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9143445Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9143859Z return mod(**inputs) 2025-08-14T21:47:09.9144372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9144879Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9145362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9145836Z outputs = block( 2025-08-14T21:47:09.9146244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9146695Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9147179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9147651Z return func(*args, **kwargs) 2025-08-14T21:47:09.9148118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9148611Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9149524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9154087Z return func(*args, **kwargs) 2025-08-14T21:47:09.9154543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9155058Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9155733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:09.9156325Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:09.9156534Z 2025-08-14T21:47:09.9156637Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9156915Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9157206Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9157653Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9158074Z return mod(**inputs) 2025-08-14T21:47:09.9158537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9159037Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9159522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9159991Z outputs = block( 2025-08-14T21:47:09.9160399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9160858Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9161401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9161872Z return func(*args, **kwargs) 2025-08-14T21:47:09.9162337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:09.9162887Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:09.9163402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:09.9163926Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:09.9164457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:09.9165085Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:09.9165395Z 2025-08-14T21:47:09.9165497Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9165766Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9166013Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9166260Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9166513Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9166766Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9167047Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9167505Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9167916Z return mod(**inputs) 2025-08-14T21:47:09.9168363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9168868Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9169358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9169829Z outputs = block( 2025-08-14T21:47:09.9170225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9170681Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9171159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9171628Z return func(*args, **kwargs) 2025-08-14T21:47:09.9172090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9172594Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9173133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9173596Z return func(*args, **kwargs) 2025-08-14T21:47:09.9174063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9174570Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9175126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:09.9175742Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:09.9175990Z 2025-08-14T21:47:09.9176219Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9176670Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9177069Z return mod(**inputs) 2025-08-14T21:47:09.9177528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9178025Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9178516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9183253Z outputs = block( 2025-08-14T21:47:09.9183659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9184124Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9184639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9185141Z return func(*args, **kwargs) 2025-08-14T21:47:09.9185643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9186184Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9186663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9187130Z return func(*args, **kwargs) 2025-08-14T21:47:09.9187588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9188093Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9188646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:09.9189223Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:09.9189426Z 2025-08-14T21:47:09.9189531Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9189781Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9190076Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9190526Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9190933Z return mod(**inputs) 2025-08-14T21:47:09.9191376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9191868Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9192348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9192802Z outputs = block( 2025-08-14T21:47:09.9193274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9193780Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9194256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9194719Z return func(*args, **kwargs) 2025-08-14T21:47:09.9195234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:09.9195749Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:09.9196257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:09.9196742Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:09.9197184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:09.9197759Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:09.9198060Z 2025-08-14T21:47:09.9198164Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9198420Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9198680Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9198933Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9199175Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9199426Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9199709Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9200156Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9200563Z return mod(**inputs) 2025-08-14T21:47:09.9201019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9201602Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9202144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9202611Z outputs = block( 2025-08-14T21:47:09.9203013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9203483Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9203970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9204442Z return func(*args, **kwargs) 2025-08-14T21:47:09.9204898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9205397Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9205888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9206358Z return func(*args, **kwargs) 2025-08-14T21:47:09.9206806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9207312Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9212115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:09.9212789Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:09.9213024Z 2025-08-14T21:47:09.9213160Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9213629Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9214044Z return mod(**inputs) 2025-08-14T21:47:09.9214492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9214989Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9215477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9215944Z outputs = block( 2025-08-14T21:47:09.9216337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9216792Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9217321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9217783Z return func(*args, **kwargs) 2025-08-14T21:47:09.9218234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9218734Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9219212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9219685Z return func(*args, **kwargs) 2025-08-14T21:47:09.9220143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9220649Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9221201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:09.9221773Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:09.9221976Z 2025-08-14T21:47:09.9222080Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9222401Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9222741Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9223189Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9223589Z return mod(**inputs) 2025-08-14T21:47:09.9224059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9224550Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9225057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9225510Z outputs = block( 2025-08-14T21:47:09.9225911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9226361Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9226832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9227288Z return func(*args, **kwargs) 2025-08-14T21:47:09.9227745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:09.9228258Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:09.9228759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:09.9229244Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:09.9229679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:09.9230245Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:09.9230543Z 2025-08-14T21:47:09.9230641Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9230896Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9231140Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9231389Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9231624Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9231867Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9232149Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9232588Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9232990Z return mod(**inputs) 2025-08-14T21:47:09.9233440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9233932Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9234465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9234930Z outputs = block( 2025-08-14T21:47:09.9235331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9235776Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9236254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9245133Z return func(*args, **kwargs) 2025-08-14T21:47:09.9245723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9246395Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9247044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9247624Z return func(*args, **kwargs) 2025-08-14T21:47:09.9248078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9248592Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9249511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:09.9250133Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:09.9250431Z 2025-08-14T21:47:09.9250561Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9251008Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9251535Z return mod(**inputs) 2025-08-14T21:47:09.9252032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9252528Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9253015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9253479Z outputs = block( 2025-08-14T21:47:09.9253872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9254325Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9254802Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9255268Z return func(*args, **kwargs) 2025-08-14T21:47:09.9255717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9256216Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9256704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9257163Z return func(*args, **kwargs) 2025-08-14T21:47:09.9257619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9258121Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9258674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:09.9259237Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:09.9259453Z 2025-08-14T21:47:09.9259556Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9259809Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9260089Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9260541Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9260945Z return mod(**inputs) 2025-08-14T21:47:09.9261467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9261954Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9262437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9262903Z outputs = block( 2025-08-14T21:47:09.9263293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9263746Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9264223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9264689Z return func(*args, **kwargs) 2025-08-14T21:47:09.9265139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:09.9265750Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:09.9266324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:09.9266809Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:09.9267242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:09.9267819Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:09.9268149Z 2025-08-14T21:47:09.9268256Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9268509Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9268761Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9269039Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9269283Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9269521Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9269806Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9270306Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9270704Z return mod(**inputs) 2025-08-14T21:47:09.9271155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9271653Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9272128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9272596Z outputs = block( 2025-08-14T21:47:09.9272993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9273449Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9273923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9274396Z return func(*args, **kwargs) 2025-08-14T21:47:09.9274852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9275346Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9275825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9276290Z return func(*args, **kwargs) 2025-08-14T21:47:09.9276746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9277249Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9277808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:09.9278409Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:09.9278703Z 2025-08-14T21:47:09.9278842Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9279281Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9279681Z return mod(**inputs) 2025-08-14T21:47:09.9286359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9286855Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9287346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9287819Z outputs = block( 2025-08-14T21:47:09.9288224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9288677Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9289155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9289619Z return func(*args, **kwargs) 2025-08-14T21:47:09.9290078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9290567Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9291056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9291520Z return func(*args, **kwargs) 2025-08-14T21:47:09.9292039Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9292544Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9293123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:09.9293697Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:09.9293903Z 2025-08-14T21:47:09.9294001Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9294253Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9294539Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9295052Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9295501Z return mod(**inputs) 2025-08-14T21:47:09.9295950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9296448Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9296925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9297385Z outputs = block( 2025-08-14T21:47:09.9297786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9298230Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9298704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9299169Z return func(*args, **kwargs) 2025-08-14T21:47:09.9299670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:09.9300180Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:09.9300697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:09.9301182Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:09.9301625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:09.9302231Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:09.9302539Z 2025-08-14T21:47:09.9302639Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9302900Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9303143Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9303410Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9303684Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9303924Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9304202Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9304657Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9305061Z return mod(**inputs) 2025-08-14T21:47:09.9305507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9306004Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9306494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9306949Z outputs = block( 2025-08-14T21:47:09.9307348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9307800Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9308273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9308732Z return func(*args, **kwargs) 2025-08-14T21:47:09.9313457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9313970Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9314488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9314954Z return func(*args, **kwargs) 2025-08-14T21:47:09.9315418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9315929Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9316482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:09.9317095Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:09.9317338Z 2025-08-14T21:47:09.9317469Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9317918Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9318319Z return mod(**inputs) 2025-08-14T21:47:09.9318779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9319280Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9319762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9320233Z outputs = block( 2025-08-14T21:47:09.9320635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9321083Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9321623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9322094Z return func(*args, **kwargs) 2025-08-14T21:47:09.9322556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9323062Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9323541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9324157Z return func(*args, **kwargs) 2025-08-14T21:47:09.9324649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9325150Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9325707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:09.9326278Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:09.9326487Z 2025-08-14T21:47:09.9326592Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9326839Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9327124Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9327571Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9327965Z return mod(**inputs) 2025-08-14T21:47:09.9328417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9328912Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9329394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9329847Z outputs = block( 2025-08-14T21:47:09.9330243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9330692Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9331186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9331653Z return func(*args, **kwargs) 2025-08-14T21:47:09.9332137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:09.9332654Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:09.9333163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:09.9333653Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:09.9334097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:09.9334668Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:09.9334966Z 2025-08-14T21:47:09.9335067Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9335326Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9335576Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9335816Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9336065Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9336314Z cudagraph partition due to non gpu ops 2025-08-14T21:47:09.9336588Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:09.9337046Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:09.9337452Z return mod(**inputs) 2025-08-14T21:47:09.9337906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:09.9342673Z transformer_outputs = self.transformer( 2025-08-14T21:47:09.9343222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:09.9343696Z outputs = block( 2025-08-14T21:47:09.9344092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:09.9344557Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:09.9345041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9345562Z return func(*args, **kwargs) 2025-08-14T21:47:09.9346020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:09.9346523Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:09.9347008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:09.9347480Z return func(*args, **kwargs) 2025-08-14T21:47:09.9347937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:09.9348449Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:09.9349362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:09.9349977Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:09.9350231Z 2025-08-14T21:47:09.9350370Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:47:09.9350824Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:09.9351232Z     return mod(**inputs)
2025-08-14T21:47:09.9351678Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-08-14T21:47:09.9352184Z     transformer_outputs = self.transformer(
2025-08-14T21:47:09.9352741Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-08-14T21:47:09.9353326Z     outputs = block(
2025-08-14T21:47:09.9353717Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:09.9354202Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:09.9354684Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:09.9355146Z     return func(*args, **kwargs)
2025-08-14T21:47:09.9355612Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-08-14T21:47:09.9356106Z     attn_output, self_attn_weights = self.attn(
2025-08-14T21:47:09.9356591Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:09.9357050Z     return func(*args, **kwargs)
2025-08-14T21:47:09.9357505Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward
2025-08-14T21:47:09.9358011Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:47:09.9358560Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:47:09.9359136Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:47:09.9359352Z 
2025-08-14T21:47:09.9359452Z cudagraph partition due to non gpu ops
2025-08-14T21:47:09.9359705Z cudagraph partition due to non gpu ops
2025-08-14T21:47:09.9359984Z cudagraph partition due to non gpu ops.
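The two SDPA frames reported above (sdpa_attention.py lines 81 and 91) are the fused scaled-dot-product-attention call and the transpose/contiguous that follows it. A minimal sketch of those two steps with illustrative shapes (batch=1, heads=12, seq=16, head_dim=64; is_causal=True is an assumption matching GPT-2's causal attention, not something stated in the trace):

import torch
import torch.nn.functional as F

# (batch, heads, seq, head_dim); shapes are illustrative only.
q = k = v = torch.randn(1, 12, 16, 64)

attn_output = F.scaled_dot_product_attention(q, k, v, is_causal=True)  # cf. sdpa_attention.py:81
attn_output = attn_output.transpose(1, 2).contiguous()                 # cf. sdpa_attention.py:91
print(attn_output.shape)  # torch.Size([1, 16, 12, 64]) -> ready to reshape to (batch, seq, hidden)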
Found from :
2025-08-14T21:47:09.9392837Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:09.9393270Z     return mod(**inputs)
2025-08-14T21:47:09.9393732Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-08-14T21:47:09.9394224Z     transformer_outputs = self.transformer(
2025-08-14T21:47:09.9394715Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-08-14T21:47:09.9395186Z     outputs = block(
2025-08-14T21:47:09.9395584Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:09.9396031Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:09.9405050Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:09.9405682Z     return func(*args, **kwargs)
2025-08-14T21:47:09.9406283Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward
2025-08-14T21:47:09.9406982Z     feed_forward_hidden_states = self.mlp(hidden_states)
2025-08-14T21:47:09.9407679Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward
2025-08-14T21:47:09.9408208Z     hidden_states = self.act(hidden_states)
2025-08-14T21:47:09.9408647Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward
2025-08-14T21:47:09.9409216Z     return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
2025-08-14T21:47:09.9409516Z 
2025-08-14T21:47:09.9409615Z cudagraph partition due to non gpu ops
2025-08-14T21:47:09.9409908Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:47:09.9410345Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:09.9412881Z     return mod(**inputs)
2025-08-14T21:47:09.9413338Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1494, in forward
2025-08-14T21:47:09.9413813Z     logits = self.score(hidden_states)
2025-08-14T21:47:09.9414033Z 
2025-08-14T21:47:09.9414164Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:47:09.9414611Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:09.9415021Z     return mod(**inputs)
2025-08-14T21:47:09.9415461Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1537, in forward
2025-08-14T21:47:09.9416029Z     loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
2025-08-14T21:47:09.9416282Z 
2025-08-14T21:47:09.9416419Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:47:09.9416861Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:09.9417257Z     return mod(**inputs)
2025-08-14T21:47:09.9417705Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1537, in forward
2025-08-14T21:47:09.9418278Z     loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
2025-08-14T21:47:09.9418527Z 
2025-08-14T21:47:26.1863987Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1864362Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1864666Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1864915Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1865183Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1865432Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1865891Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1866151Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1866403Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1866644Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1866964Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1867211Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1867505Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:47:26.1867983Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:26.1868418Z     return mod(**inputs)
2025-08-14T21:47:26.1873170Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1509, in forward
2025-08-14T21:47:26.1873732Z     last_non_pad_token = (token_indices * non_pad_mask).argmax(-1)
2025-08-14T21:47:26.1873966Z 
2025-08-14T21:47:26.1874071Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1874336Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1874616Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1874869Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1875121Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.1875443Z cudagraph partition due to non gpu ops.
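The three classification-head frames above (modeling_gpt2.py lines 1494, 1509, and 1537) score the hidden states, pick the last non-pad token per row, and feed the pooled logits to a cross-entropy loss. The sketch below is an illustrative stand-in for that pooling logic, not transformers' own code; every tensor, shape, and the pad id 0 are made up for the example.

import torch

batch, seq, hidden, num_labels = 2, 6, 8, 3
hidden_states = torch.randn(batch, seq, hidden)
input_ids = torch.tensor([[5, 7, 9, 0, 0, 0],
                          [3, 4, 6, 8, 2, 0]])  # 0 stands in for the pad token id
labels = torch.tensor([1, 2])
score = torch.nn.Linear(hidden, num_labels, bias=False)  # stand-in for self.score

logits = score(hidden_states)                                   # cf. "logits = self.score(hidden_states)"
non_pad_mask = (input_ids != 0).int()
token_indices = torch.arange(seq)
last_non_pad_token = (token_indices * non_pad_mask).argmax(-1)  # cf. modeling_gpt2.py:1509
pooled_logits = logits[torch.arange(batch), last_non_pad_token]
loss = torch.nn.functional.cross_entropy(pooled_logits.view(-1, num_labels), labels.view(-1))
print(last_non_pad_token, loss.item())                          # cf. modeling_gpt2.py:1537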
Found from : 2025-08-14T21:47:26.1875916Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.1876335Z return mod(**inputs) 2025-08-14T21:47:26.1876805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.1877315Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.1877817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.1878298Z outputs = block( 2025-08-14T21:47:26.1878716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.1879178Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.1879660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.1880141Z return func(*args, **kwargs) 2025-08-14T21:47:26.1882173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:26.1882740Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:26.1883229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.1883851Z return func(*args, **kwargs) 2025-08-14T21:47:26.1884328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:26.1884841Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:26.1885424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:26.1886039Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:26.1886280Z 2025-08-14T21:47:26.1886425Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.1886883Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.1887292Z return mod(**inputs) 2025-08-14T21:47:26.1887756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.1888255Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.1888741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.1889207Z outputs = block( 2025-08-14T21:47:26.1889617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.1890109Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.1890590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.1891099Z return func(*args, **kwargs) 2025-08-14T21:47:26.1891566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:26.1892055Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:26.1892548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.1893022Z return func(*args, **kwargs) 2025-08-14T21:47:26.1893479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:26.1893998Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:26.1894561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:26.1895144Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:26.1895353Z 2025-08-14T21:47:26.1895454Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1895714Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1896021Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.1896479Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.1896886Z return mod(**inputs) 2025-08-14T21:47:26.1897345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.1902056Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.1902595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.1903074Z outputs = block( 2025-08-14T21:47:26.1903487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.1903950Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.1904498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.1904970Z return func(*args, **kwargs) 2025-08-14T21:47:26.1905437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:26.1905947Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:26.1906463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:26.1906956Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:26.1907407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:26.1907989Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:26.1908308Z 2025-08-14T21:47:26.1908410Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1908676Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1908922Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1909178Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1909426Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1909672Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1910086Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.1910545Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.1910953Z return mod(**inputs) 2025-08-14T21:47:26.1911401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.1911932Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.1912502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.1913066Z outputs = block( 2025-08-14T21:47:26.1913466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.1913926Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.1914403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.1914860Z return func(*args, **kwargs) 2025-08-14T21:47:26.1915322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:26.1915818Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:26.1916305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.1916757Z return func(*args, **kwargs) 2025-08-14T21:47:26.1917217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:26.1917720Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:26.1918280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:26.1918878Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:26.1919118Z 2025-08-14T21:47:26.1919249Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.1919701Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.1920103Z return mod(**inputs) 2025-08-14T21:47:26.1920564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.1921055Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.1921631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.1922091Z outputs = block( 2025-08-14T21:47:26.1922550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.1923001Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.1923475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.1923947Z return func(*args, **kwargs) 2025-08-14T21:47:26.1924410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:26.1924909Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:26.1925388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.1925861Z return func(*args, **kwargs) 2025-08-14T21:47:26.1926322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:26.1931070Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:26.1931678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:26.1932260Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:26.1932466Z 2025-08-14T21:47:26.1932575Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1932831Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1933125Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.1933622Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.1934045Z return mod(**inputs) 2025-08-14T21:47:26.1934492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.1935061Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.1935558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.1936015Z outputs = block( 2025-08-14T21:47:26.1936415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.1936870Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.1937345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.1937808Z return func(*args, **kwargs) 2025-08-14T21:47:26.1938266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:26.1938786Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:26.1939292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:26.1939780Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:26.1951711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:26.1952326Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:26.1952641Z 2025-08-14T21:47:26.1952749Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1953018Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1953277Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1953533Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1953791Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1954045Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1954328Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.1954803Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.1955220Z return mod(**inputs) 2025-08-14T21:47:26.1960201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.1960779Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.1961365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.1961851Z outputs = block( 2025-08-14T21:47:26.1962257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.1962727Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.1963218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.1963698Z return func(*args, **kwargs) 2025-08-14T21:47:26.1964161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:26.1964685Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:26.1965186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.1965654Z return func(*args, **kwargs) 2025-08-14T21:47:26.1966124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:26.1966635Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:26.1967205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:26.1967858Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:26.1968106Z 2025-08-14T21:47:26.1968290Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.1968745Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.1969161Z return mod(**inputs) 2025-08-14T21:47:26.1969611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.1970118Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.1970713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.1971207Z outputs = block( 2025-08-14T21:47:26.1971616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.1972082Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.1972567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.1973033Z return func(*args, **kwargs) 2025-08-14T21:47:26.1973499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:26.1974011Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:26.1974494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.1974972Z return func(*args, **kwargs) 2025-08-14T21:47:26.1975434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:26.1975948Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:26.1976509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:26.1977092Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:26.1977303Z 2025-08-14T21:47:26.1977414Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1977672Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1978013Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.1978473Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.1978886Z return mod(**inputs) 2025-08-14T21:47:26.1979337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.1979843Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.1980341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.1980812Z outputs = block( 2025-08-14T21:47:26.1981210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.1981679Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.1982157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.1982632Z return func(*args, **kwargs) 2025-08-14T21:47:26.1983101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:26.1983622Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:26.1984135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:26.1984621Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:26.1993475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:26.1994297Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:26.1994721Z 2025-08-14T21:47:26.1994842Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1995142Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1995445Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1995749Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1996035Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1996284Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.1996577Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.1997028Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.1997442Z return mod(**inputs) 2025-08-14T21:47:26.1997900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.1998406Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.1998891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.2001484Z outputs = block( 2025-08-14T21:47:26.2001945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.2002400Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.2002879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2003352Z return func(*args, **kwargs) 2025-08-14T21:47:26.2003813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:26.2004305Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:26.2004792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2005274Z return func(*args, **kwargs) 2025-08-14T21:47:26.2005733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:26.2006258Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:26.2006875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:26.2007492Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:26.2007731Z 2025-08-14T21:47:26.2007866Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.2008318Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.2008731Z return mod(**inputs) 2025-08-14T21:47:26.2009189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.2009685Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.2010176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.2010651Z outputs = block( 2025-08-14T21:47:26.2011051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.2011517Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.2011999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2012477Z return func(*args, **kwargs) 2025-08-14T21:47:26.2012932Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:26.2013434Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:26.2014033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2014552Z return func(*args, **kwargs) 2025-08-14T21:47:26.2015044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:26.2015556Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:26.2016128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:26.2016727Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:26.2016935Z 2025-08-14T21:47:26.2017045Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2017298Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2017590Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.2018045Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.2018449Z return mod(**inputs) 2025-08-14T21:47:26.2018903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.2019406Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.2019897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.2020356Z outputs = block( 2025-08-14T21:47:26.2020761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.2021218Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.2021690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2022161Z return func(*args, **kwargs) 2025-08-14T21:47:26.2022624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:26.2023141Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:26.2023642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:26.2024129Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:26.2024620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:26.2025197Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:26.2025492Z 2025-08-14T21:47:26.2025590Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2025843Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2026096Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2026337Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2026584Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2026830Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2027103Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.2027560Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.2027974Z return mod(**inputs) 2025-08-14T21:47:26.2032760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.2033308Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.2033800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.2034274Z outputs = block( 2025-08-14T21:47:26.2034667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.2035125Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.2035669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2036136Z return func(*args, **kwargs) 2025-08-14T21:47:26.2036614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:26.2037116Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:26.2037605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2038072Z return func(*args, **kwargs) 2025-08-14T21:47:26.2038528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:26.2039041Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:26.2039605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:26.2040210Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:26.2040457Z 2025-08-14T21:47:26.2040588Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.2041045Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.2041512Z return mod(**inputs) 2025-08-14T21:47:26.2041964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.2042462Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.2043017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.2043534Z outputs = block( 2025-08-14T21:47:26.2043946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.2044403Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.2044882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2045338Z return func(*args, **kwargs) 2025-08-14T21:47:26.2045800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:26.2046359Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:26.2046848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2047308Z return func(*args, **kwargs) 2025-08-14T21:47:26.2047766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:26.2048269Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:26.2049180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:26.2049774Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:26.2049989Z 2025-08-14T21:47:26.2050087Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2050346Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2050654Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.2051123Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.2051531Z return mod(**inputs) 2025-08-14T21:47:26.2051990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.2052486Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.2052971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.2053430Z outputs = block( 2025-08-14T21:47:26.2053930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.2054387Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.2054893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2055364Z return func(*args, **kwargs) 2025-08-14T21:47:26.2055827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:26.2056349Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:26.2056860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:26.2061518Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:26.2062012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:26.2062589Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:26.2062885Z 2025-08-14T21:47:26.2062984Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2063243Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2063496Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2063738Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2063990Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2064245Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2064527Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.2064981Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.2065383Z return mod(**inputs) 2025-08-14T21:47:26.2065836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.2066331Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.2066822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.2067291Z outputs = block( 2025-08-14T21:47:26.2067685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.2068230Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.2068718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2069185Z return func(*args, **kwargs) 2025-08-14T21:47:26.2069639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:26.2070146Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:26.2070631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2071105Z return func(*args, **kwargs) 2025-08-14T21:47:26.2071561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:26.2072144Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:26.2072766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:26.2073368Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:26.2073614Z 2025-08-14T21:47:26.2073746Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.2074197Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.2074610Z return mod(**inputs) 2025-08-14T21:47:26.2075056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.2075587Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.2076071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.2076553Z outputs = block( 2025-08-14T21:47:26.2076956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.2077418Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.2077895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2078353Z return func(*args, **kwargs) 2025-08-14T21:47:26.2078817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:26.2079324Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:26.2079811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2080279Z return func(*args, **kwargs) 2025-08-14T21:47:26.2080737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:26.2081326Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:26.2081897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:26.2082478Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:26.2082692Z 2025-08-14T21:47:26.2082790Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2083047Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2083331Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.2083784Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.2084200Z return mod(**inputs) 2025-08-14T21:47:26.2084646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.2085153Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.2085651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.2086179Z outputs = block( 2025-08-14T21:47:26.2090802Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.2091315Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.2091805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2092264Z return func(*args, **kwargs) 2025-08-14T21:47:26.2092729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:26.2093259Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:26.2093773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:26.2094263Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:26.2094707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:26.2095285Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:26.2095588Z 2025-08-14T21:47:26.2095698Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2095953Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2096201Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2096452Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2096694Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2096977Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2097259Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.2097702Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.2098132Z return mod(**inputs) 2025-08-14T21:47:26.2098587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.2099086Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.2099568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.2100037Z outputs = block( 2025-08-14T21:47:26.2100447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.2100967Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.2101508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2101979Z return func(*args, **kwargs) 2025-08-14T21:47:26.2102439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:26.2102933Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:26.2103426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2103890Z return func(*args, **kwargs) 2025-08-14T21:47:26.2104349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:26.2104847Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:26.2105404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:26.2106015Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:26.2106253Z 2025-08-14T21:47:26.2106383Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.2106827Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.2107232Z return mod(**inputs) 2025-08-14T21:47:26.2107757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.2108246Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.2108733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.2109197Z outputs = block( 2025-08-14T21:47:26.2109587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.2110033Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.2110508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2110971Z return func(*args, **kwargs) 2025-08-14T21:47:26.2111425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:26.2111925Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:26.2112415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2112878Z return func(*args, **kwargs) 2025-08-14T21:47:26.2113331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:26.2113837Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:26.2114403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:26.2115007Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:26.2115219Z 2025-08-14T21:47:26.2123733Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2124042Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2124429Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:26.2125020Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:26.2125549Z return mod(**inputs) 2025-08-14T21:47:26.2126148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:26.2126646Z transformer_outputs = self.transformer( 2025-08-14T21:47:26.2127134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:26.2127599Z outputs = block( 2025-08-14T21:47:26.2127999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:26.2128446Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:26.2128928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:26.2129400Z return func(*args, **kwargs) 2025-08-14T21:47:26.2129927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:26.2130505Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:26.2131016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:26.2131501Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:26.2131939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:26.2132516Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:26.2132822Z 2025-08-14T21:47:26.2132921Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2133178Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2133423Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2133672Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2133919Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2134204Z cudagraph partition due to non gpu ops 2025-08-14T21:47:26.2134493Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:47:26.2134943Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:26.2135340Z     return mod(**inputs)
2025-08-14T21:47:26.2135792Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-08-14T21:47:26.2136281Z     transformer_outputs = self.transformer(
2025-08-14T21:47:26.2136764Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-08-14T21:47:26.2137223Z     outputs = block(
2025-08-14T21:47:26.2137622Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:26.2138077Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:26.2138550Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:26.2139018Z     return func(*args, **kwargs)
2025-08-14T21:47:26.2139479Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-08-14T21:47:26.2139974Z     attn_output, self_attn_weights = self.attn(
2025-08-14T21:47:26.2140451Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:26.2140935Z     return func(*args, **kwargs)
2025-08-14T21:47:26.2141392Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward
2025-08-14T21:47:26.2141898Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:47:26.2142476Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:47:26.2143087Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:47:26.2143324Z
2025-08-14T21:47:26.2143461Z cudagraph partition due to non gpu ops.
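The two sdpa_attention.py frames in the traces above (lines 81 and 91) are the scaled_dot_product_attention call and the transpose/contiguous that follows it. A minimal sketch of that pattern, with assumed (batch, num_heads, seq_len, head_dim) shapes and a causal mask as the only illustrative choices:

import torch
import torch.nn.functional as F

batch, num_heads, seq_len, head_dim = 2, 12, 1024, 64   # assumed GPT-2-like sizes
q = torch.randn(batch, num_heads, seq_len, head_dim)
k = torch.randn(batch, num_heads, seq_len, head_dim)
v = torch.randn(batch, num_heads, seq_len, head_dim)

# Fused attention kernel (sdpa_attention.py line 81 in the trace).
attn_output = F.scaled_dot_product_attention(q, k, v, is_causal=True)

# Back to (batch, seq_len, num_heads, head_dim) before merging heads
# (the transpose(1, 2).contiguous() at line 91 in the trace).
attn_output = attn_output.transpose(1, 2).contiguous()
attn_output = attn_output.reshape(batch, seq_len, num_heads * head_dim)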
Found from :
2025-08-14T21:47:26.2297889Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:26.2298296Z     return mod(**inputs)
2025-08-14T21:47:26.2298739Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-08-14T21:47:26.2299242Z     transformer_outputs = self.transformer(
2025-08-14T21:47:26.2299734Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-08-14T21:47:26.2300204Z     outputs = block(
2025-08-14T21:47:26.2300599Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:26.2301055Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:26.2301527Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:26.2301998Z     return func(*args, **kwargs)
2025-08-14T21:47:26.2302449Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward
2025-08-14T21:47:26.2302969Z     feed_forward_hidden_states = self.mlp(hidden_states)
2025-08-14T21:47:26.2303481Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward
2025-08-14T21:47:26.2304047Z     hidden_states = self.act(hidden_states)
2025-08-14T21:47:26.2304539Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward
2025-08-14T21:47:26.2305119Z     return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
2025-08-14T21:47:26.2305415Z
2025-08-14T21:47:26.2305579Z cudagraph partition due to non gpu ops
2025-08-14T21:47:26.2305868Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:47:26.2306323Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:26.2306732Z     return mod(**inputs)
2025-08-14T21:47:26.2307179Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1494, in forward
2025-08-14T21:47:26.2307669Z     logits = self.score(hidden_states)
2025-08-14T21:47:26.2307843Z
2025-08-14T21:47:26.2307978Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:47:26.2308435Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:26.2308847Z     return mod(**inputs)
2025-08-14T21:47:26.2309303Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1537, in forward
2025-08-14T21:47:26.2309894Z     loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
2025-08-14T21:47:26.2310150Z
2025-08-14T21:47:26.2310299Z cudagraph partition due to non gpu ops.
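The last two traces above come from the sequence-classification head: logits = self.score(hidden_states) followed by a cross-entropy loss over the pooled logits and labels. A hedged sketch of that loss call, with made-up batch size and label count purely for illustration:

import torch
import torch.nn as nn

batch, num_labels = 8, 2                     # assumed sizes for the example
pooled_logits = torch.randn(batch, num_labels)
labels = torch.randint(0, num_labels, (batch,))

loss_fct = nn.CrossEntropyLoss()
# Same call shape as modeling_gpt2.py line 1537 in the trace above.
loss = loss_fct(pooled_logits.view(-1, num_labels), labels.view(-1))
print(loss.item())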
Found from :
2025-08-14T21:47:26.2310750Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:26.2311162Z     return mod(**inputs)
2025-08-14T21:47:26.2311610Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1537, in forward
2025-08-14T21:47:26.2312194Z     loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
2025-08-14T21:47:26.2312455Z
2025-08-14T21:47:29.3846614Z Compilation time (from dynamo_timed): 30.430898536
2025-08-14T21:47:29.3848234Z pass
2025-08-14T21:47:29.3850320Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:47:29.3851657Z TIMING: _recursive_pre_grad_passes:0.12514 _recursive_joint_graph_passes:0.98826 _recursive_post_grad_passes:0.21065 async_compile.wait:0.90672 code_gen:9.51629 inductor_compile:15.41416 backend_compile:24.947 gc:0.00085 entire_frame_compile:30.4309 total_wall_time:30.4309
2025-08-14T21:47:29.3852805Z STATS: call_* op count: 1138 | FakeTensorMode.__torch_dispatch__:42150 | FakeTensor.__torch_dispatch__:7924 | ProxyTorchDispatchMode.__torch_dispatch__:8335
2025-08-14T21:47:29.3853436Z Dynamo produced 2 graphs covering 1138 ops with 0 graph breaks (0 unique)
2025-08-14T21:47:36.4512267Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:47:36.4513399Z   from pkg_resources import resource_filename
2025-08-14T21:47:37.3867345Z
2025-08-14T21:47:39.1781675Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:47:39.1782163Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:47:39.1795972Z cpu eval GoogleFnet
2025-08-14T21:47:39.8424183Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:47:40.1613545Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:47:40.4702538Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:47:50.8613867Z cudagraph partition due to non gpu ops.
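The "Compilation time (from dynamo_timed): 30.43" and TIMING lines above are reported by the benchmark harness and are dominated by the first compiled call. A rough, hedged way to observe the same first-call-versus-steady-state effect outside the harness (the model and input below are placeholders, not the benchmark's):

import time
import torch

model = torch.nn.Sequential(torch.nn.Linear(768, 768), torch.nn.GELU()).eval()  # placeholder model
x = torch.randn(8, 768)

compiled = torch.compile(model)          # Inductor backend by default

t0 = time.perf_counter()
with torch.no_grad():
    compiled(x)                          # first call triggers Dynamo tracing + Inductor compile
print("first call (includes compile):", time.perf_counter() - t0)

t0 = time.perf_counter()
with torch.no_grad():
    compiled(x)                          # steady-state call reuses the compiled graph
print("steady state:", time.perf_counter() - t0)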
Found from :
2025-08-14T21:47:50.8614696Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:50.8615352Z     return mod(**inputs)
2025-08-14T21:47:50.8615980Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward
2025-08-14T21:47:50.8628789Z     outputs = self.fnet(
2025-08-14T21:47:50.8636409Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward
2025-08-14T21:47:50.8637094Z     encoder_outputs = self.encoder(
2025-08-14T21:47:50.8637588Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward
2025-08-14T21:47:50.8638085Z     layer_outputs = layer_module(hidden_states)
2025-08-14T21:47:50.8638554Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:50.8639028Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:50.8639773Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward
2025-08-14T21:47:50.8640397Z     self_fourier_outputs = self.fourier(hidden_states)
2025-08-14T21:47:50.8641030Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward
2025-08-14T21:47:50.8641672Z     self_outputs = self.self(hidden_states)
2025-08-14T21:47:50.8642166Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward
2025-08-14T21:47:50.8642931Z     outputs = self.fourier_transform(hidden_states).real
2025-08-14T21:47:50.8643221Z
2025-08-14T21:47:50.8643422Z cudagraph partition due to non gpu ops.
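modeling_fnet.py line 181 above replaces self-attention with a Fourier transform and keeps only the real part of the result. A hedged sketch of that token-mixing step, assuming the 2-D FFT formulation from the FNet paper (the Hugging Face implementation may select a different FFT path depending on configuration):

import torch

hidden_states = torch.randn(2, 512, 768)    # (batch, seq_len, hidden) -- assumed FNet-like shape

# FNet mixing: FFT over the sequence and hidden dimensions, keep the real part
# (the self.fourier_transform(hidden_states).real call in the trace above).
mixed = torch.fft.fft2(hidden_states).real  # fft2 acts on the last two dims by default

assert mixed.shape == hidden_states.shape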
Found from :
2025-08-14T21:47:50.8738874Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:50.8739274Z     return mod(**inputs)
2025-08-14T21:47:50.8739702Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward
2025-08-14T21:47:50.8740163Z     outputs = self.fnet(
2025-08-14T21:47:50.8740659Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward
2025-08-14T21:47:50.8741128Z     encoder_outputs = self.encoder(
2025-08-14T21:47:50.8741590Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward
2025-08-14T21:47:50.8742086Z     layer_outputs = layer_module(hidden_states)
2025-08-14T21:47:50.8742541Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:50.8742991Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:50.8743465Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward
2025-08-14T21:47:50.8743971Z     self_fourier_outputs = self.fourier(hidden_states)
2025-08-14T21:47:50.8744464Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward
2025-08-14T21:47:50.8744951Z     self_outputs = self.self(hidden_states)
2025-08-14T21:47:50.8745430Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward
2025-08-14T21:47:50.8754688Z     outputs = self.fourier_transform(hidden_states).real
2025-08-14T21:47:50.8754941Z
2025-08-14T21:47:50.8755055Z cudagraph partition due to non gpu ops
2025-08-14T21:47:50.8755408Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:47:50.8756119Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:50.8756636Z     return mod(**inputs)
2025-08-14T21:47:50.8757212Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward
2025-08-14T21:47:50.8757812Z     outputs = self.fnet(
2025-08-14T21:47:50.8758263Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 512, in forward
2025-08-14T21:47:50.8758741Z     embedding_output = self.embeddings(
2025-08-14T21:47:50.8759224Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 142, in forward
2025-08-14T21:47:50.8759716Z     embeddings = self.projection(embeddings)
2025-08-14T21:47:50.8759893Z
2025-08-14T21:47:50.8759999Z cudagraph partition due to non gpu ops
2025-08-14T21:47:50.8762365Z cudagraph partition due to non gpu ops.
2025-08-14T21:47:50.8798557Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:47:50.8799002Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:50.8799408Z     return mod(**inputs)
2025-08-14T21:47:50.8799839Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward
2025-08-14T21:47:50.8800304Z     outputs = self.fnet(
2025-08-14T21:47:50.8800738Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward
2025-08-14T21:47:50.8801204Z     encoder_outputs = self.encoder(
2025-08-14T21:47:50.8801735Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward
2025-08-14T21:47:50.8802226Z     layer_outputs = layer_module(hidden_states)
2025-08-14T21:47:50.8802685Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:50.8803129Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:50.8803611Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward
2025-08-14T21:47:50.8804218Z     layer_output = apply_chunking_to_forward(
2025-08-14T21:47:50.8804718Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:47:50.8805217Z     return forward_fn(*input_tensors)
2025-08-14T21:47:50.8805727Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk
2025-08-14T21:47:50.8806298Z     intermediate_output = self.intermediate(fourier_output)
2025-08-14T21:47:50.8806818Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward
2025-08-14T21:47:50.8807344Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:47:50.8807825Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward
2025-08-14T21:47:50.8808506Z     return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
2025-08-14T21:47:50.8808805Z
2025-08-14T21:47:50.8808905Z cudagraph partition due to non gpu ops
2025-08-14T21:47:50.8809174Z cudagraph partition due to non gpu ops
2025-08-14T21:47:50.8809473Z cudagraph partition due to non gpu ops.
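The second variant of the trace, which goes through apply_chunking_to_forward, ends in the tanh approximation of GELU quoted verbatim in the activations.py frame above. A small stand-alone check of that formula against PyTorch's built-in tanh-approximate GELU (the helper name below is ours, purely illustrative):

    import math

    import torch

    def gelu_tanh(x: torch.Tensor) -> torch.Tensor:
        # Same expression as in the activations.py frame above.
        return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))

    x = torch.randn(8)
    # PyTorch's built-in tanh-approximate GELU should agree to within float32 tolerance.
    print(torch.allclose(gelu_tanh(x), torch.nn.functional.gelu(x, approximate="tanh"), atol=1e-6))  # True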
Found from : 2025-08-14T21:47:50.8809912Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8810322Z return mod(**inputs) 2025-08-14T21:47:50.8810770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8811237Z outputs = self.fnet( 2025-08-14T21:47:50.8811666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8812147Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8812615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8813100Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8813556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8814001Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8814477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8815002Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.8815496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.8816002Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.8816481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.8816985Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.8817187Z 2025-08-14T21:47:50.8817319Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8817763Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8822391Z return mod(**inputs) 2025-08-14T21:47:50.8822836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8823305Z outputs = self.fnet( 2025-08-14T21:47:50.8823741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8824209Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8824677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8825167Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8825618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8826078Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8826562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8827069Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.8827565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.8828049Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.8828531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.8829041Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.8829239Z 2025-08-14T21:47:50.8829370Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8829871Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8830280Z return mod(**inputs) 2025-08-14T21:47:50.8830706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8831175Z outputs = self.fnet( 2025-08-14T21:47:50.8831608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8832083Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8832538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8833126Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8833606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8834053Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8834537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8835063Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.8835567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.8836047Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.8836531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.8837080Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.8837310Z 2025-08-14T21:47:50.8837459Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8837926Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8838328Z return mod(**inputs) 2025-08-14T21:47:50.8838767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8839222Z outputs = self.fnet( 2025-08-14T21:47:50.8839658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8840130Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8840591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8841073Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8841599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8842049Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8842518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8843024Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.8843520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.8844005Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.8844475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.8844982Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.8845194Z 2025-08-14T21:47:50.8845296Z cudagraph partition due to non gpu ops 2025-08-14T21:47:50.8845590Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8846025Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8846426Z return mod(**inputs) 2025-08-14T21:47:50.8846911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8851799Z outputs = self.fnet( 2025-08-14T21:47:50.8852234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8852709Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8853178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8853662Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8854125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8854585Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8855055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-08-14T21:47:50.8855550Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:50.8856058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:50.8856559Z return forward_fn(*input_tensors) 2025-08-14T21:47:50.8857060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-08-14T21:47:50.8857628Z intermediate_output = self.intermediate(fourier_output) 2025-08-14T21:47:50.8858157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-08-14T21:47:50.8858785Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:50.8859253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:50.8859862Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:50.8860157Z 2025-08-14T21:47:50.8860271Z cudagraph partition due to non gpu ops 2025-08-14T21:47:50.8860534Z cudagraph partition due to non gpu ops 2025-08-14T21:47:50.8860820Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8861287Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8861778Z return mod(**inputs) 2025-08-14T21:47:50.8862270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8862737Z outputs = self.fnet( 2025-08-14T21:47:50.8863179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8863646Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8864117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8864613Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8865070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8865510Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8865989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8866542Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.8867044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.8867523Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.8867997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.8868507Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.8868706Z 2025-08-14T21:47:50.8868839Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8869391Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8869793Z return mod(**inputs) 2025-08-14T21:47:50.8870226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8870740Z outputs = self.fnet( 2025-08-14T21:47:50.8871171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8871648Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8872106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8872592Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8873052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8873506Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8873974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8874480Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.8874978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.8875456Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.8875928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.8880659Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.8880857Z 2025-08-14T21:47:50.8881031Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8881553Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8881958Z return mod(**inputs) 2025-08-14T21:47:50.8882399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8882875Z outputs = self.fnet( 2025-08-14T21:47:50.8883302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8883782Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8884248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8884740Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8885190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8885656Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8886140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8886642Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.8887140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.8887625Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.8888102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.8888601Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.8888809Z 2025-08-14T21:47:50.8888939Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8889383Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8889779Z return mod(**inputs) 2025-08-14T21:47:50.8890215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8890809Z outputs = self.fnet( 2025-08-14T21:47:50.8891300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8891765Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8892237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8892725Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8893184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8893632Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8894109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8894616Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.8895108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.8895592Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.8896066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.8896574Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.8896768Z 2025-08-14T21:47:50.8896871Z cudagraph partition due to non gpu ops 2025-08-14T21:47:50.8897158Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8897636Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8898033Z return mod(**inputs) 2025-08-14T21:47:50.8898474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8898974Z outputs = self.fnet( 2025-08-14T21:47:50.8899414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8899880Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8900346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8900836Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8901286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8901739Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8902216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-08-14T21:47:50.8902704Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:50.8903201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:50.8903703Z return forward_fn(*input_tensors) 2025-08-14T21:47:50.8904225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-08-14T21:47:50.8904790Z intermediate_output = self.intermediate(fourier_output) 2025-08-14T21:47:50.8913803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-08-14T21:47:50.8914501Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:50.8915125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:50.8915874Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:50.8916281Z 2025-08-14T21:47:50.8916391Z cudagraph partition due to non gpu ops 2025-08-14T21:47:50.8916703Z cudagraph partition due to non gpu ops 2025-08-14T21:47:50.8916994Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8917483Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8917889Z return mod(**inputs) 2025-08-14T21:47:50.8918330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8918798Z outputs = self.fnet( 2025-08-14T21:47:50.8919227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8921427Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8921894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8922381Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8922841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8923290Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8923767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8924266Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.8924768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.8925254Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.8925723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.8926268Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.8926469Z 2025-08-14T21:47:50.8926620Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8927063Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8927457Z return mod(**inputs) 2025-08-14T21:47:50.8927890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8928358Z outputs = self.fnet( 2025-08-14T21:47:50.8928789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8929249Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8929711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8930206Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8930659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8931111Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8931583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8932093Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.8932583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.8933067Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.8933547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.8934048Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.8934335Z 2025-08-14T21:47:50.8934467Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8934972Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8935387Z return mod(**inputs) 2025-08-14T21:47:50.8935818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8936350Z outputs = self.fnet( 2025-08-14T21:47:50.8936789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8937261Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8937716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8938204Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8938662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8939163Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8939641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8940149Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.8940649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.8941128Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.8941623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.8942132Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.8942328Z 2025-08-14T21:47:50.8942465Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8942899Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8943335Z return mod(**inputs) 2025-08-14T21:47:50.8943774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8944268Z outputs = self.fnet( 2025-08-14T21:47:50.8944698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8945174Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8945640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8946118Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8946574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8947018Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8947483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8947991Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.8948486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.8953889Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.8954362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.8954869Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.8955067Z 2025-08-14T21:47:50.8955177Z cudagraph partition due to non gpu ops 2025-08-14T21:47:50.8955458Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8955899Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8956302Z return mod(**inputs) 2025-08-14T21:47:50.8956743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8957200Z outputs = self.fnet( 2025-08-14T21:47:50.8957636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8958106Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8958693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8959183Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8959640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8960087Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8960558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-08-14T21:47:50.8961050Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:50.8961629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:50.8962130Z return forward_fn(*input_tensors) 2025-08-14T21:47:50.8962635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-08-14T21:47:50.8963274Z intermediate_output = self.intermediate(fourier_output) 2025-08-14T21:47:50.8963857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-08-14T21:47:50.8964369Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:50.8964845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:50.8965411Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:50.8965747Z 2025-08-14T21:47:50.8965854Z cudagraph partition due to non gpu ops 2025-08-14T21:47:50.8966109Z cudagraph partition due to non gpu ops 2025-08-14T21:47:50.8966443Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8966888Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8967293Z return mod(**inputs) 2025-08-14T21:47:50.8967776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8968249Z outputs = self.fnet( 2025-08-14T21:47:50.8968687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8969152Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8969616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8970106Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8970562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8971013Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8971488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8971998Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.8972487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.8972971Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.8973446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.8973957Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.8974158Z 2025-08-14T21:47:50.8974292Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8974735Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8975140Z return mod(**inputs) 2025-08-14T21:47:50.8975576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8976081Z outputs = self.fnet( 2025-08-14T21:47:50.8976517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8976990Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8977444Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8982101Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8982558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8983019Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8983491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8983999Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.8984504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.8984986Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.8985461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.8985977Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.8986176Z 2025-08-14T21:47:50.8986317Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8986752Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8987180Z return mod(**inputs) 2025-08-14T21:47:50.8987618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8988104Z outputs = self.fnet( 2025-08-14T21:47:50.8988534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8989014Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8989481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8989961Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8990417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8990868Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8991346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8991839Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.8992409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.8992946Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.8993425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.8993939Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.8994154Z 2025-08-14T21:47:50.8994287Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.8994739Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.8995138Z return mod(**inputs) 2025-08-14T21:47:50.8995584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.8996073Z outputs = self.fnet( 2025-08-14T21:47:50.8996511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.8997018Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.8997528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.8998013Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.8998459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.8998901Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.8999376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.8999875Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.9000365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.9000845Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.9001435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.9001945Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.9002141Z 2025-08-14T21:47:50.9002244Z cudagraph partition due to non gpu ops 2025-08-14T21:47:50.9002527Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.9002970Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.9003361Z return mod(**inputs) 2025-08-14T21:47:50.9003799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.9004292Z outputs = self.fnet( 2025-08-14T21:47:50.9004728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.9005189Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.9005682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.9006173Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.9006624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.9011299Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.9011777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-08-14T21:47:50.9012265Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:50.9012761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:50.9013258Z return forward_fn(*input_tensors) 2025-08-14T21:47:50.9013777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-08-14T21:47:50.9014336Z intermediate_output = self.intermediate(fourier_output) 2025-08-14T21:47:50.9014867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-08-14T21:47:50.9015394Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:50.9015863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:50.9016420Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:50.9016720Z 2025-08-14T21:47:50.9016822Z cudagraph partition due to non gpu ops 2025-08-14T21:47:50.9017081Z cudagraph partition due to non gpu ops 2025-08-14T21:47:50.9017518Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.9017957Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.9018359Z return mod(**inputs) 2025-08-14T21:47:50.9018799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.9019320Z outputs = self.fnet( 2025-08-14T21:47:50.9019760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.9020241Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.9020705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.9021259Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.9021778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.9022228Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.9035018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.9040099Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.9040651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.9041155Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.9041755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.9042274Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.9042552Z 2025-08-14T21:47:50.9042700Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.9043164Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.9043726Z return mod(**inputs) 2025-08-14T21:47:50.9044175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.9044685Z outputs = self.fnet( 2025-08-14T21:47:50.9045129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.9045602Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.9046074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.9046562Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.9047018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.9047462Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.9047941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.9048457Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.9049405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.9049898Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.9050473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.9051049Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.9051248Z 2025-08-14T21:47:50.9051382Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.9051833Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.9052239Z return mod(**inputs) 2025-08-14T21:47:50.9052676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.9053144Z outputs = self.fnet( 2025-08-14T21:47:50.9053584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.9054064Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.9054660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.9055160Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.9055619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.9056076Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.9056550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.9057295Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.9058059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.9058553Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.9059194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.9059773Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.9059973Z 2025-08-14T21:47:50.9060112Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.9060552Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.9060954Z return mod(**inputs) 2025-08-14T21:47:50.9061391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.9061863Z outputs = self.fnet( 2025-08-14T21:47:50.9062342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.9062856Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.9063548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.9064221Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.9072828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.9073432Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.9074065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:50.9074717Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:50.9075378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:50.9076024Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:50.9076514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:50.9077026Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:50.9077228Z 2025-08-14T21:47:50.9077333Z cudagraph partition due to non gpu ops 2025-08-14T21:47:50.9077636Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:50.9078077Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:50.9078483Z return mod(**inputs) 2025-08-14T21:47:50.9078913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:50.9081661Z outputs = self.fnet( 2025-08-14T21:47:50.9082089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:50.9082570Z encoder_outputs = self.encoder( 2025-08-14T21:47:50.9083042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:50.9083536Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:50.9084001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:50.9084540Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:50.9085016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-08-14T21:47:50.9085498Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:50.9086003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:50.9086504Z return forward_fn(*input_tensors) 2025-08-14T21:47:50.9087019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-08-14T21:47:50.9087579Z intermediate_output = self.intermediate(fourier_output) 2025-08-14T21:47:50.9088113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-08-14T21:47:50.9088640Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:50.9089115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:50.9089688Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:50.9089992Z 2025-08-14T21:47:50.9090096Z cudagraph partition due to non gpu ops 2025-08-14T21:47:50.9090353Z cudagraph partition due to non gpu ops 2025-08-14T21:47:50.9090633Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:47:50.9413899Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:50.9414296Z     return mod(**inputs)
2025-08-14T21:47:50.9414748Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 686, in forward
2025-08-14T21:47:50.9415383Z     masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:47:50.9415692Z 
2025-08-14T21:47:57.3965557Z Compilation time (from dynamo_timed): 15.184697942
2025-08-14T21:47:57.4051877Z pass
2025-08-14T21:47:57.4053284Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:47:57.4058815Z TIMING: _recursive_pre_grad_passes:0.03537 _recursive_joint_graph_passes:0.30875 _recursive_post_grad_passes:0.11086 async_compile.wait:1.01179 code_gen:5.88856 inductor_compile:9.21228 backend_compile:12.67876 gc:0.00145 entire_frame_compile:15.1847 total_wall_time:15.1847
2025-08-14T21:47:57.4060258Z STATS: call_* op count: 232 | FakeTensorMode.__torch_dispatch__:14364 | FakeTensor.__torch_dispatch__:3342 | ProxyTorchDispatchMode.__torch_dispatch__:2923
2025-08-14T21:47:57.4060969Z Dynamo produced 1 graphs covering 232 ops with 0 graph breaks (0 unique)
2025-08-14T21:48:04.1733021Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:48:04.1734113Z   from pkg_resources import resource_filename
2025-08-14T21:48:04.9786039Z 
2025-08-14T21:48:07.1882361Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:48:07.1882705Z loading model: 0it [00:02, ?it/s]
2025-08-14T21:48:07.1909995Z cpu eval LayoutLMForMaskedLM
2025-08-14T21:48:08.2640971Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:48:08.8225585Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:48:09.3907684Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:48:25.6895881Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6896262Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6900618Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6900940Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6901203Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6901458Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6901699Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6902459Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6902721Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6902958Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6903255Z cudagraph partition due to non gpu ops.
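Earlier in this block, the last FNet trace bottoms out in the masked-LM loss call, `loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))`. A self-contained sketch of that flattening pattern, with made-up shapes and a hypothetical vocabulary size (not values from this run):

```python
import torch
import torch.nn as nn

vocab_size = 32000                                  # illustrative, not from the log
prediction_scores = torch.randn(2, 16, vocab_size)  # (batch, seq_len, vocab) logits
labels = torch.randint(0, vocab_size, (2, 16))      # (batch, seq_len) target token ids

loss_fct = nn.CrossEntropyLoss()
# Same view(-1, ...) flattening as the traced call: merge batch and sequence dims
# so CrossEntropyLoss sees (batch*seq_len, vocab) logits against (batch*seq_len,) targets.
masked_lm_loss = loss_fct(prediction_scores.view(-1, vocab_size), labels.view(-1))
print(masked_lm_loss.item())
```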
Found from :
2025-08-14T21:48:25.6903732Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:48:25.6904159Z     return mod(**inputs)
2025-08-14T21:48:25.6904636Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.6905123Z     return func(*args, **kwargs)
2025-08-14T21:48:25.6906014Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.6906479Z     return func(*args, **kwargs)
2025-08-14T21:48:25.6906910Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:25.6907402Z     output = func(self, *args, **kwargs)
2025-08-14T21:48:25.6907917Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward
2025-08-14T21:48:25.6908434Z     outputs = self.layoutlm(
2025-08-14T21:48:25.6908879Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.6909350Z     return func(*args, **kwargs)
2025-08-14T21:48:25.6909925Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.6910392Z     return func(*args, **kwargs)
2025-08-14T21:48:25.6910807Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:25.6911255Z     output = func(self, *args, **kwargs)
2025-08-14T21:48:25.6911770Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward
2025-08-14T21:48:25.6912284Z     encoder_outputs = self.encoder(
2025-08-14T21:48:25.6912745Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.6913273Z     return func(*args, **kwargs)
2025-08-14T21:48:25.6913722Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.6914340Z     return func(*args, **kwargs)
2025-08-14T21:48:25.6914942Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.6915501Z     return func(*args, **kwargs)
2025-08-14T21:48:25.6915836Z   [Previous line repeated 1 more time]
2025-08-14T21:48:25.6916416Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:25.6917078Z     output = func(self, *args, **kwargs)
2025-08-14T21:48:25.6917806Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward
2025-08-14T21:48:25.6918422Z     layer_outputs = layer_module(
2025-08-14T21:48:25.6919026Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:48:25.6919599Z     return super().__call__(*args, **kwargs)
2025-08-14T21:48:25.6924433Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.6925042Z     return func(*args, **kwargs)
2025-08-14T21:48:25.6925604Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.6926311Z     return func(*args, **kwargs)
2025-08-14T21:48:25.6926917Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.6927554Z     return func(*args, **kwargs)
2025-08-14T21:48:25.6928037Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward
2025-08-14T21:48:25.6928639Z     layer_output = apply_chunking_to_forward(
2025-08-14T21:48:25.6929299Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:48:25.6929918Z     return forward_fn(*input_tensors)
2025-08-14T21:48:25.6930725Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk
2025-08-14T21:48:25.6931518Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:48:25.6932356Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward
2025-08-14T21:48:25.6933016Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:48:25.6933609Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:48:25.6934149Z     return self.act(input)
2025-08-14T21:48:25.6934357Z 
2025-08-14T21:48:25.6934462Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6934810Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6935066Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6935316Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6935570Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6935815Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6936067Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6936321Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6936568Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6936805Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6937053Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.6937339Z cudagraph partition due to non gpu ops.
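Both the FNet and LayoutLM traces above pass through `transformers.pytorch_utils.apply_chunking_to_forward` on the way to the activation. A minimal usage sketch, assuming `transformers` is installed; the module, shapes, and chunk size below are placeholders, not values from this run:

```python
import torch
from transformers.pytorch_utils import apply_chunking_to_forward

hidden = torch.randn(2, 512, 768)            # (batch, seq_len, hidden) -- illustrative
intermediate = torch.nn.Linear(768, 3072)    # stand-in for the intermediate projection

def feed_forward_chunk(chunk):
    # Mirrors the feed_forward_chunk helpers seen in the traces above.
    return intermediate(chunk)

# chunk_size=0 disables chunking (the common default); a positive value splits the
# sequence dimension (chunk_dim=1) into pieces processed one at a time to save memory.
out = apply_chunking_to_forward(feed_forward_chunk, 0, 1, hidden)
print(out.shape)  # torch.Size([2, 512, 3072])
```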
Found from : 2025-08-14T21:48:25.6937797Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:25.6938266Z return mod(**inputs) 2025-08-14T21:48:25.6938838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6939315Z return func(*args, **kwargs) 2025-08-14T21:48:25.6939796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6940265Z return func(*args, **kwargs) 2025-08-14T21:48:25.6940699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.6941136Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.6941639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward 2025-08-14T21:48:25.6942143Z outputs = self.layoutlm( 2025-08-14T21:48:25.6942590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6943040Z return func(*args, **kwargs) 2025-08-14T21:48:25.6943482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6943937Z return func(*args, **kwargs) 2025-08-14T21:48:25.6944352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.6944795Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.6945298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:25.6945798Z encoder_outputs = self.encoder( 2025-08-14T21:48:25.6946254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6946710Z return func(*args, **kwargs) 2025-08-14T21:48:25.6947152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6947612Z return func(*args, **kwargs) 2025-08-14T21:48:25.6948043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6948500Z return func(*args, **kwargs) 2025-08-14T21:48:25.6957299Z [Previous line repeated 1 more time] 2025-08-14T21:48:25.6958004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.6958578Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.6959255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:25.6959917Z layer_outputs = layer_module( 2025-08-14T21:48:25.6960344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:25.6960794Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:25.6961325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6961777Z return func(*args, **kwargs) 2025-08-14T21:48:25.6962224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6962687Z 
return func(*args, **kwargs) 2025-08-14T21:48:25.6963131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6965850Z return func(*args, **kwargs) 2025-08-14T21:48:25.6966336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 2025-08-14T21:48:25.6966854Z layer_output = apply_chunking_to_forward( 2025-08-14T21:48:25.6967399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:48:25.6967945Z return forward_fn(*input_tensors) 2025-08-14T21:48:25.6968482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:48:25.6969122Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:48:25.6969688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:48:25.6970242Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:48:25.6970726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:48:25.6971150Z return self.act(input) 2025-08-14T21:48:25.6971290Z 2025-08-14T21:48:25.6971387Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.6971646Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.6971893Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.6972137Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.6972388Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.6972634Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.6972879Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.6973122Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.6973370Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.6973618Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.6973855Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.6974137Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:48:25.6974585Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:25.6974988Z return mod(**inputs) 2025-08-14T21:48:25.6975422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6975881Z return func(*args, **kwargs) 2025-08-14T21:48:25.6976319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6976774Z return func(*args, **kwargs) 2025-08-14T21:48:25.6977192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.6977633Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.6978366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward 2025-08-14T21:48:25.6978871Z outputs = self.layoutlm( 2025-08-14T21:48:25.6979313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6979770Z return func(*args, **kwargs) 2025-08-14T21:48:25.6980208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6980688Z return func(*args, **kwargs) 2025-08-14T21:48:25.6981105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.6981551Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.6982069Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:25.6982576Z encoder_outputs = self.encoder( 2025-08-14T21:48:25.6983032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6983486Z return func(*args, **kwargs) 2025-08-14T21:48:25.6983931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6984377Z return func(*args, **kwargs) 2025-08-14T21:48:25.6984816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6985303Z return func(*args, **kwargs) 2025-08-14T21:48:25.6985536Z [Previous line repeated 1 more time] 2025-08-14T21:48:25.6985968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.6986437Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.6986942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:25.6987435Z layer_outputs = layer_module( 2025-08-14T21:48:25.6987864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:25.6988317Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:25.6988782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6989231Z return func(*args, **kwargs) 2025-08-14T21:48:25.6989672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6990128Z 
return func(*args, **kwargs) 2025-08-14T21:48:25.6990562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.6991014Z return func(*args, **kwargs) 2025-08-14T21:48:25.6991494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 2025-08-14T21:48:25.6992011Z layer_output = apply_chunking_to_forward( 2025-08-14T21:48:25.6992562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:48:25.6997303Z return forward_fn(*input_tensors) 2025-08-14T21:48:25.6997837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:48:25.6998444Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:48:25.6998998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:48:25.6999551Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:48:25.7000090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:48:25.7000527Z return self.act(input) 2025-08-14T21:48:25.7000709Z 2025-08-14T21:48:25.7000810Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7001066Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7001396Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7001637Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7001888Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7002268Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7002506Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7002751Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7002997Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7003235Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7003483Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7003776Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:48:25.7004241Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:25.7004654Z return mod(**inputs) 2025-08-14T21:48:25.7005088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7005550Z return func(*args, **kwargs) 2025-08-14T21:48:25.7006247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7006751Z return func(*args, **kwargs) 2025-08-14T21:48:25.7007306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7007745Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7008266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward 2025-08-14T21:48:25.7008773Z outputs = self.layoutlm( 2025-08-14T21:48:25.7009267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7009722Z return func(*args, **kwargs) 2025-08-14T21:48:25.7010165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7010622Z return func(*args, **kwargs) 2025-08-14T21:48:25.7011036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7011476Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7011974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:25.7012480Z encoder_outputs = self.encoder( 2025-08-14T21:48:25.7012931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7013388Z return func(*args, **kwargs) 2025-08-14T21:48:25.7013826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7014282Z return func(*args, **kwargs) 2025-08-14T21:48:25.7014710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7015165Z return func(*args, **kwargs) 2025-08-14T21:48:25.7015392Z [Previous line repeated 1 more time] 2025-08-14T21:48:25.7015822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7016269Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7016772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:25.7017281Z layer_outputs = layer_module( 2025-08-14T21:48:25.7017754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:25.7018210Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:25.7018684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7019147Z return func(*args, **kwargs) 2025-08-14T21:48:25.7019587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7020049Z 
return func(*args, **kwargs) 2025-08-14T21:48:25.7020499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7020948Z return func(*args, **kwargs) 2025-08-14T21:48:25.7021492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 2025-08-14T21:48:25.7026112Z layer_output = apply_chunking_to_forward( 2025-08-14T21:48:25.7026628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:48:25.7027125Z return forward_fn(*input_tensors) 2025-08-14T21:48:25.7027662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:48:25.7028270Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:48:25.7028836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:48:25.7029426Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:48:25.7029911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:48:25.7030366Z return self.act(input) 2025-08-14T21:48:25.7030508Z 2025-08-14T21:48:25.7030607Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7030868Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7031120Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7031358Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7031604Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7031847Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7032089Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7032324Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7032570Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7032812Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7033044Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7033325Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:48:25.7033772Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:25.7034170Z return mod(**inputs) 2025-08-14T21:48:25.7034613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7035076Z return func(*args, **kwargs) 2025-08-14T21:48:25.7035520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7036027Z return func(*args, **kwargs) 2025-08-14T21:48:25.7036519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7036956Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7037453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward 2025-08-14T21:48:25.7037952Z outputs = self.layoutlm( 2025-08-14T21:48:25.7038399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7038855Z return func(*args, **kwargs) 2025-08-14T21:48:25.7039337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7039800Z return func(*args, **kwargs) 2025-08-14T21:48:25.7040276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7040712Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7041311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:25.7041827Z encoder_outputs = self.encoder( 2025-08-14T21:48:25.7042292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7042746Z return func(*args, **kwargs) 2025-08-14T21:48:25.7043197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7043658Z return func(*args, **kwargs) 2025-08-14T21:48:25.7044104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7044562Z return func(*args, **kwargs) 2025-08-14T21:48:25.7044800Z [Previous line repeated 1 more time] 2025-08-14T21:48:25.7045239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7045673Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7046174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:25.7046714Z layer_outputs = layer_module( 2025-08-14T21:48:25.7047145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:25.7047618Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:25.7048097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7048564Z return func(*args, **kwargs) 2025-08-14T21:48:25.7049351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7049821Z 
return func(*args, **kwargs) 2025-08-14T21:48:25.7050325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7054953Z return func(*args, **kwargs) 2025-08-14T21:48:25.7055435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 2025-08-14T21:48:25.7055954Z layer_output = apply_chunking_to_forward( 2025-08-14T21:48:25.7056463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:48:25.7056946Z return forward_fn(*input_tensors) 2025-08-14T21:48:25.7057491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:48:25.7058098Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:48:25.7058661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:48:25.7059207Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:48:25.7059686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:48:25.7060113Z return self.act(input) 2025-08-14T21:48:25.7060250Z 2025-08-14T21:48:25.7060354Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7060600Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7060849Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7061089Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7061442Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7061690Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7061938Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7062172Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7062419Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7062662Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7062895Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7063175Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:48:25.7063632Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:25.7064043Z return mod(**inputs) 2025-08-14T21:48:25.7064476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7064991Z return func(*args, **kwargs) 2025-08-14T21:48:25.7065518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7065976Z return func(*args, **kwargs) 2025-08-14T21:48:25.7066400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7066844Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7067351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward 2025-08-14T21:48:25.7067847Z outputs = self.layoutlm( 2025-08-14T21:48:25.7068332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7068797Z return func(*args, **kwargs) 2025-08-14T21:48:25.7069291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7069780Z return func(*args, **kwargs) 2025-08-14T21:48:25.7070206Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7070650Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7071145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:25.7071655Z encoder_outputs = self.encoder( 2025-08-14T21:48:25.7072118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7072584Z return func(*args, **kwargs) 2025-08-14T21:48:25.7073020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7073475Z return func(*args, **kwargs) 2025-08-14T21:48:25.7073920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7074374Z return func(*args, **kwargs) 2025-08-14T21:48:25.7074609Z [Previous line repeated 1 more time] 2025-08-14T21:48:25.7075052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7075491Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7075985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:25.7076485Z layer_outputs = layer_module( 2025-08-14T21:48:25.7076918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:25.7077368Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:25.7077837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7078296Z return func(*args, **kwargs) 2025-08-14T21:48:25.7078787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7079272Z 
return func(*args, **kwargs) 2025-08-14T21:48:25.7083972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7084438Z return func(*args, **kwargs) 2025-08-14T21:48:25.7084918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 2025-08-14T21:48:25.7085438Z layer_output = apply_chunking_to_forward( 2025-08-14T21:48:25.7085945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:48:25.7086444Z return forward_fn(*input_tensors) 2025-08-14T21:48:25.7086976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:48:25.7087583Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:48:25.7088147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:48:25.7088696Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:48:25.7089169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:48:25.7089593Z return self.act(input) 2025-08-14T21:48:25.7089735Z 2025-08-14T21:48:25.7089839Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7090120Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7090370Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7090616Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7090889Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7091130Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7091374Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7091624Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7091863Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7092107Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7092356Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7092632Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:48:25.7093083Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:25.7093493Z return mod(**inputs) 2025-08-14T21:48:25.7093975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7094528Z return func(*args, **kwargs) 2025-08-14T21:48:25.7094978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7095448Z return func(*args, **kwargs) 2025-08-14T21:48:25.7095868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7096310Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7096813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward 2025-08-14T21:48:25.7097314Z outputs = self.layoutlm( 2025-08-14T21:48:25.7097750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7098270Z return func(*args, **kwargs) 2025-08-14T21:48:25.7098722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7099170Z return func(*args, **kwargs) 2025-08-14T21:48:25.7099590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7100039Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7100599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:25.7101095Z encoder_outputs = self.encoder( 2025-08-14T21:48:25.7101554Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7102009Z return func(*args, **kwargs) 2025-08-14T21:48:25.7102439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7102896Z return func(*args, **kwargs) 2025-08-14T21:48:25.7103339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7103792Z return func(*args, **kwargs) 2025-08-14T21:48:25.7104018Z [Previous line repeated 1 more time] 2025-08-14T21:48:25.7104450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7104899Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7105394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:25.7105904Z layer_outputs = layer_module( 2025-08-14T21:48:25.7106335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:25.7106784Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:25.7107252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7107743Z return func(*args, **kwargs) 2025-08-14T21:48:25.7108192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7117126Z 
return func(*args, **kwargs) 2025-08-14T21:48:25.7117699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7118296Z return func(*args, **kwargs) 2025-08-14T21:48:25.7118922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 2025-08-14T21:48:25.7119614Z layer_output = apply_chunking_to_forward( 2025-08-14T21:48:25.7120295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:48:25.7120846Z return forward_fn(*input_tensors) 2025-08-14T21:48:25.7121448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:48:25.7122044Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:48:25.7122615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:48:25.7125347Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:48:25.7125815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:48:25.7126240Z return self.act(input) 2025-08-14T21:48:25.7126386Z 2025-08-14T21:48:25.7126485Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7126741Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7127031Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7127283Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7127541Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7127781Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7128024Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7128272Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7128510Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7128754Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7128998Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7129350Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:48:25.7129800Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:25.7130207Z return mod(**inputs) 2025-08-14T21:48:25.7130644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7131100Z return func(*args, **kwargs) 2025-08-14T21:48:25.7131549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7132012Z return func(*args, **kwargs) 2025-08-14T21:48:25.7132430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7132867Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7133378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward 2025-08-14T21:48:25.7133878Z outputs = self.layoutlm( 2025-08-14T21:48:25.7134315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7134777Z return func(*args, **kwargs) 2025-08-14T21:48:25.7135216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7135669Z return func(*args, **kwargs) 2025-08-14T21:48:25.7136077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7136552Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7137052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:25.7137720Z encoder_outputs = self.encoder( 2025-08-14T21:48:25.7138188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7138652Z return func(*args, **kwargs) 2025-08-14T21:48:25.7139091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7139540Z return func(*args, **kwargs) 2025-08-14T21:48:25.7139985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7140446Z return func(*args, **kwargs) 2025-08-14T21:48:25.7140672Z [Previous line repeated 1 more time] 2025-08-14T21:48:25.7141110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7141556Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7142056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:25.7142553Z layer_outputs = layer_module( 2025-08-14T21:48:25.7142985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:25.7143437Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:25.7143907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7144364Z return func(*args, **kwargs) 2025-08-14T21:48:25.7144811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7145283Z 
return func(*args, **kwargs) 2025-08-14T21:48:25.7145715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7146176Z return func(*args, **kwargs) 2025-08-14T21:48:25.7146707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 2025-08-14T21:48:25.7147229Z layer_output = apply_chunking_to_forward( 2025-08-14T21:48:25.7147729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:48:25.7148222Z return forward_fn(*input_tensors) 2025-08-14T21:48:25.7149079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:48:25.7149685Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:48:25.7150249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:48:25.7150810Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:48:25.7151295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:48:25.7151739Z return self.act(input) 2025-08-14T21:48:25.7151921Z 2025-08-14T21:48:25.7152021Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7156458Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7156706Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7156960Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7157205Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7157464Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7157712Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7157962Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7158281Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7158518Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7158761Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7159090Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:48:25.7159530Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:25.7160002Z return mod(**inputs) 2025-08-14T21:48:25.7160442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7160899Z return func(*args, **kwargs) 2025-08-14T21:48:25.7161409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7161867Z return func(*args, **kwargs) 2025-08-14T21:48:25.7162289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7162723Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7163226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward 2025-08-14T21:48:25.7163728Z outputs = self.layoutlm( 2025-08-14T21:48:25.7164171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7164624Z return func(*args, **kwargs) 2025-08-14T21:48:25.7165074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7165538Z return func(*args, **kwargs) 2025-08-14T21:48:25.7165951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7166441Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7167022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:25.7167533Z encoder_outputs = self.encoder( 2025-08-14T21:48:25.7167987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7168504Z return func(*args, **kwargs) 2025-08-14T21:48:25.7169026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7169480Z return func(*args, **kwargs) 2025-08-14T21:48:25.7169914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7170371Z return func(*args, **kwargs) 2025-08-14T21:48:25.7170608Z [Previous line repeated 1 more time] 2025-08-14T21:48:25.7171041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:25.7171487Z output = func(self, *args, **kwargs) 2025-08-14T21:48:25.7171992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:25.7172505Z layer_outputs = layer_module( 2025-08-14T21:48:25.7172935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:25.7173395Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:25.7173869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7174333Z return func(*args, **kwargs) 2025-08-14T21:48:25.7174775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7175254Z 
return func(*args, **kwargs) 2025-08-14T21:48:25.7175699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:25.7176185Z return func(*args, **kwargs) 2025-08-14T21:48:25.7176665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 2025-08-14T21:48:25.7177211Z layer_output = apply_chunking_to_forward( 2025-08-14T21:48:25.7177712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:48:25.7178210Z return forward_fn(*input_tensors) 2025-08-14T21:48:25.7178742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:48:25.7179343Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:48:25.7179897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:48:25.7180455Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:48:25.7180990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:48:25.7185665Z return self.act(input) 2025-08-14T21:48:25.7185811Z 2025-08-14T21:48:25.7185909Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7186165Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7186416Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7186659Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7186902Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7187146Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7187387Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7187632Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7187882Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7188123Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7188369Z cudagraph partition due to non gpu ops 2025-08-14T21:48:25.7188651Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:48:25.7189101Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:48:25.7189506Z return mod(**inputs)
2025-08-14T21:48:25.7189943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.7190408Z return func(*args, **kwargs)
2025-08-14T21:48:25.7190910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.7191375Z return func(*args, **kwargs)
2025-08-14T21:48:25.7191801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:25.7192251Z output = func(self, *args, **kwargs)
2025-08-14T21:48:25.7192748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward
2025-08-14T21:48:25.7193256Z outputs = self.layoutlm(
2025-08-14T21:48:25.7193713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.7194182Z return func(*args, **kwargs)
2025-08-14T21:48:25.7194620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.7195083Z return func(*args, **kwargs)
2025-08-14T21:48:25.7195552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:25.7196065Z output = func(self, *args, **kwargs)
2025-08-14T21:48:25.7196570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward
2025-08-14T21:48:25.7197078Z encoder_outputs = self.encoder(
2025-08-14T21:48:25.7197537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.7198023Z return func(*args, **kwargs)
2025-08-14T21:48:25.7198470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.7198960Z return func(*args, **kwargs)
2025-08-14T21:48:25.7199436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.7199910Z return func(*args, **kwargs)
2025-08-14T21:48:25.7200140Z [Previous line repeated 1 more time]
2025-08-14T21:48:25.7200582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:25.7201014Z output = func(self, *args, **kwargs)
2025-08-14T21:48:25.7201584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward
2025-08-14T21:48:25.7202087Z layer_outputs = layer_module(
2025-08-14T21:48:25.7202510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:48:25.7202963Z return super().__call__(*args, **kwargs)
2025-08-14T21:48:25.7203437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.7203898Z return func(*args, **kwargs)
2025-08-14T21:48:25.7204338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.7204795Z return func(*args, **kwargs)
2025-08-14T21:48:25.7205233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.7205684Z return func(*args, **kwargs)
2025-08-14T21:48:25.7206170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward
2025-08-14T21:48:25.7206689Z layer_output = apply_chunking_to_forward(
2025-08-14T21:48:25.7207191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:48:25.7207684Z return forward_fn(*input_tensors)
2025-08-14T21:48:25.7208277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk
2025-08-14T21:48:25.7208881Z intermediate_output = self.intermediate(attention_output)
2025-08-14T21:48:25.7209438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward
2025-08-14T21:48:25.7210037Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:48:25.7214761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:48:25.7215193Z return self.act(input)
2025-08-14T21:48:25.7215330Z
2025-08-14T21:48:25.7215432Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.7215695Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.7215953Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.7216204Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.7216452Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.7216703Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.7216950Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.7217189Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.7217442Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.7217691Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.7217939Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.7218227Z cudagraph partition due to non gpu ops.
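The "cudagraph partition due to non gpu ops" lines above appear to be Inductor reporting that it is splitting the captured graph around operations that do not run on the GPU, and the "Found from :" stack that accompanies them points at the model code that produced those ops. As a rough, hedged sketch of the kind of call that exercises this path (illustrative only, not this benchmark harness's actual configuration; the toy model below is made up):

import torch

# "reduce-overhead" asks torch.compile to use CUDA graphs where it can; regions
# containing non-GPU ops are partitioned out of the capture, which is the kind of
# situation the log messages above report. On a CPU-only runner the model still
# compiles and runs, just without any CUDA graph capture.
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).eval()
compiled = torch.compile(model, mode="reduce-overhead")

with torch.no_grad():
    print(compiled(torch.randn(8, 64)).shape)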
2025-08-14T21:48:25.7294173Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.7294424Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.7294680Z cudagraph partition due to non gpu ops
2025-08-14T21:48:25.7294968Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:48:25.7295415Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:48:25.7295822Z return mod(**inputs)
2025-08-14T21:48:25.7296270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.7296778Z return func(*args, **kwargs)
2025-08-14T21:48:25.7297310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:25.7297780Z return func(*args, **kwargs)
2025-08-14T21:48:25.7298210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:25.7298652Z output = func(self, *args, **kwargs)
2025-08-14T21:48:25.7299154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 776, in forward
2025-08-14T21:48:25.7299658Z masked_lm_loss = loss_fct(
2025-08-14T21:48:25.7299806Z
2025-08-14T21:48:32.3817832Z Compilation time (from dynamo_timed): 21.0565669
2025-08-14T21:48:32.3876970Z pass
2025-08-14T21:48:32.3878489Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:48:32.3879695Z TIMING: _recursive_pre_grad_passes:0.06182 _recursive_joint_graph_passes:0.67373 _recursive_post_grad_passes:0.1118 async_compile.wait:0.74674 code_gen:5.47892 inductor_compile:9.58192 backend_compile:16.60162 gc:0.00026 entire_frame_compile:21.05657 total_wall_time:21.05657
2025-08-14T21:48:32.3881095Z STATS: call_* op count: 432 | FakeTensorMode.__torch_dispatch__:27394 | FakeTensor.__torch_dispatch__:3961 | ProxyTorchDispatchMode.__torch_dispatch__:6668
2025-08-14T21:48:32.3881835Z Dynamo produced 1 graphs covering 432 ops with 0 graph breaks (0 unique)
2025-08-14T21:48:38.8070845Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:48:38.8071952Z from pkg_resources import resource_filename
2025-08-14T21:48:39.6070721Z
2025-08-14T21:48:41.4614612Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:48:41.4614988Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:48:41.4629342Z cpu eval LayoutLMForSequenceClassification
2025-08-14T21:48:42.3923543Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:48:42.8846124Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:48:43.3626650Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:48:59.0001407Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0001770Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0002050Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0009237Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0009622Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0009929Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0010187Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0010449Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0010698Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0011038Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0011657Z cudagraph partition due to non gpu ops.
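The "Compilation time (from dynamo_timed)" figure above matches the entire_frame_compile / total_wall_time buckets in the TIMING line, with backend_compile and inductor_compile as the dominant sub-phases. A minimal, hedged way to observe the same cold-start-versus-warm-run effect outside the harness (illustrative only; the toy model, sizes, and timing approach below are not the benchmark's actual dynamo_timed instrumentation):

import time
import torch

# Hypothetical stand-in for the benchmark's HuggingFace model.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 512),
    torch.nn.GELU(),
    torch.nn.Linear(512, 128),
).eval()
example = torch.randn(4, 128)

compiled = torch.compile(model, backend="inductor")

with torch.no_grad():
    t0 = time.perf_counter()
    compiled(example)  # first call: Dynamo tracing plus Inductor code generation
    print(f"cold start: {time.perf_counter() - t0:.2f}s")

    t0 = time.perf_counter()
    compiled(example)  # later calls reuse the compiled artifact
    print(f"warm run:   {time.perf_counter() - t0:.4f}s")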
Found from :
2025-08-14T21:48:59.0012177Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:48:59.0012624Z return mod(**inputs)
2025-08-14T21:48:59.0013067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:59.0013533Z output = func(self, *args, **kwargs)
2025-08-14T21:48:59.0014261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward
2025-08-14T21:48:59.0014804Z outputs = self.layoutlm(
2025-08-14T21:48:59.0015265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:59.0015762Z return func(*args, **kwargs)
2025-08-14T21:48:59.0016228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:59.0016702Z return func(*args, **kwargs)
2025-08-14T21:48:59.0017135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:59.0017582Z output = func(self, *args, **kwargs)
2025-08-14T21:48:59.0018096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward
2025-08-14T21:48:59.0018652Z encoder_outputs = self.encoder(
2025-08-14T21:48:59.0019119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:59.0019650Z return func(*args, **kwargs)
2025-08-14T21:48:59.0020086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:59.0020595Z return func(*args, **kwargs)
2025-08-14T21:48:59.0021045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:59.0021511Z return func(*args, **kwargs)
2025-08-14T21:48:59.0021742Z [Previous line repeated 1 more time]
2025-08-14T21:48:59.0022188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:59.0022682Z output = func(self, *args, **kwargs)
2025-08-14T21:48:59.0023193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward
2025-08-14T21:48:59.0023711Z layer_outputs = layer_module(
2025-08-14T21:48:59.0024146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:48:59.0024604Z return super().__call__(*args, **kwargs)
2025-08-14T21:48:59.0025075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:59.0025549Z return func(*args, **kwargs)
2025-08-14T21:48:59.0025997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:59.0026457Z return func(*args, **kwargs)
2025-08-14T21:48:59.0026895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:59.0027356Z return func(*args, **kwargs)
2025-08-14T21:48:59.0027844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward
2025-08-14T21:48:59.0032627Z layer_output = apply_chunking_to_forward(
2025-08-14T21:48:59.0033151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:48:59.0033659Z return forward_fn(*input_tensors)
2025-08-14T21:48:59.0034281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk
2025-08-14T21:48:59.0034891Z intermediate_output = self.intermediate(attention_output)
2025-08-14T21:48:59.0035467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward
2025-08-14T21:48:59.0036032Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:48:59.0036525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:48:59.0036962Z return self.act(input)
2025-08-14T21:48:59.0037121Z
2025-08-14T21:48:59.0037225Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0037484Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0037733Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0037991Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0038246Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0038588Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0038834Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0039083Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0039332Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0039572Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0039823Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0040129Z cudagraph partition due to non gpu ops.
2025-08-14T21:48:59.0365018Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0365276Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0365570Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:48:59.0366025Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:48:59.0366441Z return mod(**inputs)
2025-08-14T21:48:59.0366929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:59.0367371Z output = func(self, *args, **kwargs)
2025-08-14T21:48:59.0367876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward
2025-08-14T21:48:59.0368383Z outputs = self.layoutlm(
2025-08-14T21:48:59.0368831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:59.0369293Z return func(*args, **kwargs)
2025-08-14T21:48:59.0369746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:59.0370212Z return func(*args, **kwargs)
2025-08-14T21:48:59.0370627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:59.0371082Z output = func(self, *args, **kwargs)
2025-08-14T21:48:59.0371594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 654, in forward
2025-08-14T21:48:59.0372123Z pooled_output = self.pooler(sequence_output)
2025-08-14T21:48:59.0372646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 431, in forward
2025-08-14T21:48:59.0373188Z pooled_output = self.activation(pooled_output)
2025-08-14T21:48:59.0373405Z
2025-08-14T21:48:59.0373518Z cudagraph partition due to non gpu ops
2025-08-14T21:48:59.0373807Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:48:59.0374259Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:48:59.0374693Z return mod(**inputs)
2025-08-14T21:48:59.0375106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:59.0375549Z output = func(self, *args, **kwargs)
2025-08-14T21:48:59.0376054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 911, in forward
2025-08-14T21:48:59.0385088Z loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
2025-08-14T21:48:59.0385390Z
2025-08-14T21:48:59.0385553Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:48:59.0386131Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:48:59.0386666Z return mod(**inputs)
2025-08-14T21:48:59.0387201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:59.0387659Z output = func(self, *args, **kwargs)
2025-08-14T21:48:59.0388163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 911, in forward
2025-08-14T21:48:59.0388736Z loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
2025-08-14T21:48:59.0388967Z
2025-08-14T21:49:17.0042478Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0042850Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0043142Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0043438Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0043736Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0044014Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0044265Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0044542Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0044787Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0045039Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0045339Z cudagraph partition due to non gpu ops.
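The "cudagraph partition due to non gpu ops" lines above come from TorchInductor's cudagraph handling: when the compiled graph contains ops that do not run on the GPU, the cudagraph machinery partitions around them and logs the user stack of the offending op, which is why the same LayoutLM activation, pooler, and loss frames keep reappearing in the traces. Below is a hypothetical, minimal sketch (not taken from this job) of the kind of setup that triggers such messages: a small CPU-only feed-forward block, loosely modeled on the Linear + activation frames in the traces, compiled with a cudagraphs-enabled mode. Whether the partition messages are actually printed depends on Inductor's logging configuration.

```python
# Hypothetical sketch, not part of this job's benchmark code.
import torch
import torch.nn as nn


class FeedForwardChunk(nn.Module):
    """Tiny stand-in for the LayoutLM intermediate block seen in the traces."""

    def __init__(self, hidden: int = 64, intermediate: int = 256):
        super().__init__()
        self.dense = nn.Linear(hidden, intermediate)
        self.act = nn.GELU()  # plays the role of intermediate_act_fn

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.dense(x))


mod = FeedForwardChunk()
# "reduce-overhead" asks torch.compile to use CUDA graphs where possible;
# on a CPU-only graph there is nothing for cudagraphs to capture, so the
# ops fall on the "non gpu" side of the partition described in the log.
compiled = torch.compile(mod, mode="reduce-overhead")
print(compiled(torch.randn(8, 64)).shape)
```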
Found from : 2025-08-14T21:49:17.0045887Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:17.0046630Z return mod(**inputs) 2025-08-14T21:49:17.0047085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:49:17.0047541Z output = func(self, *args, **kwargs) 2025-08-14T21:49:17.0050674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward 2025-08-14T21:49:17.0051307Z outputs = self.layoutlm( 2025-08-14T21:49:17.0051764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0052249Z return func(*args, **kwargs) 2025-08-14T21:49:17.0052711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0053175Z return func(*args, **kwargs) 2025-08-14T21:49:17.0053590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:49:17.0054042Z output = func(self, *args, **kwargs) 2025-08-14T21:49:17.0054549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:49:17.0055051Z encoder_outputs = self.encoder( 2025-08-14T21:49:17.0055539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0056004Z return func(*args, **kwargs) 2025-08-14T21:49:17.0056441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0056984Z return func(*args, **kwargs) 2025-08-14T21:49:17.0057428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0057924Z return func(*args, **kwargs) 2025-08-14T21:49:17.0058159Z [Previous line repeated 1 more time] 2025-08-14T21:49:17.0058603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:49:17.0059052Z output = func(self, *args, **kwargs) 2025-08-14T21:49:17.0059548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:49:17.0060058Z layer_outputs = layer_module( 2025-08-14T21:49:17.0064754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:17.0065211Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:17.0065690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0066157Z return func(*args, **kwargs) 2025-08-14T21:49:17.0066605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0067060Z return func(*args, **kwargs) 2025-08-14T21:49:17.0067511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0067969Z return func(*args, **kwargs) 2025-08-14T21:49:17.0068457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 
2025-08-14T21:49:17.0068973Z layer_output = apply_chunking_to_forward( 2025-08-14T21:49:17.0069484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:49:17.0069989Z return forward_fn(*input_tensors) 2025-08-14T21:49:17.0070521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:49:17.0071140Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:49:17.0071762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:49:17.0072317Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:49:17.0072791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:49:17.0073218Z return self.act(input) 2025-08-14T21:49:17.0073451Z 2025-08-14T21:49:17.0073573Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0073837Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0074089Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0074334Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0074586Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0075019Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0075315Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0075559Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0075797Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0076043Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0076280Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0076558Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:17.0077039Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:17.0077449Z return mod(**inputs) 2025-08-14T21:49:17.0077862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:49:17.0078343Z output = func(self, *args, **kwargs) 2025-08-14T21:49:17.0078838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward 2025-08-14T21:49:17.0079408Z outputs = self.layoutlm( 2025-08-14T21:49:17.0079869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0080331Z return func(*args, **kwargs) 2025-08-14T21:49:17.0080778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0081309Z return func(*args, **kwargs) 2025-08-14T21:49:17.0081757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:49:17.0082194Z output = func(self, *args, **kwargs) 2025-08-14T21:49:17.0082695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:49:17.0083209Z encoder_outputs = self.encoder( 2025-08-14T21:49:17.0083718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0084178Z return func(*args, **kwargs) 2025-08-14T21:49:17.0084625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0085085Z return func(*args, **kwargs) 2025-08-14T21:49:17.0085517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0085976Z return func(*args, **kwargs) 2025-08-14T21:49:17.0086209Z [Previous line repeated 1 more time] 2025-08-14T21:49:17.0086644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:49:17.0087085Z output = func(self, *args, **kwargs) 2025-08-14T21:49:17.0087592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:49:17.0088091Z layer_outputs = layer_module( 2025-08-14T21:49:17.0088518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:17.0088970Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:17.0093695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0094163Z return func(*args, **kwargs) 2025-08-14T21:49:17.0094599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0095064Z return func(*args, **kwargs) 2025-08-14T21:49:17.0095557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0096016Z return func(*args, **kwargs) 2025-08-14T21:49:17.0096495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 
2025-08-14T21:49:17.0097023Z layer_output = apply_chunking_to_forward( 2025-08-14T21:49:17.0097539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:49:17.0098028Z return forward_fn(*input_tensors) 2025-08-14T21:49:17.0098564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:49:17.0099165Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:49:17.0099722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:49:17.0100278Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:49:17.0100783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:49:17.0101210Z return self.act(input) 2025-08-14T21:49:17.0101356Z 2025-08-14T21:49:17.0101481Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0101737Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0101985Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0102225Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0102480Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0102723Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0102966Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0103201Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0103443Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0103687Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0104002Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0104335Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:17.0104791Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:17.0105189Z return mod(**inputs) 2025-08-14T21:49:17.0105603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:49:17.0106047Z output = func(self, *args, **kwargs) 2025-08-14T21:49:17.0106552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward 2025-08-14T21:49:17.0107046Z outputs = self.layoutlm( 2025-08-14T21:49:17.0107491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0107947Z return func(*args, **kwargs) 2025-08-14T21:49:17.0108390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0108848Z return func(*args, **kwargs) 2025-08-14T21:49:17.0109264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:49:17.0109701Z output = func(self, *args, **kwargs) 2025-08-14T21:49:17.0110196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:49:17.0110696Z encoder_outputs = self.encoder( 2025-08-14T21:49:17.0111192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0111646Z return func(*args, **kwargs) 2025-08-14T21:49:17.0112092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0112581Z return func(*args, **kwargs) 2025-08-14T21:49:17.0113025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0113481Z return func(*args, **kwargs) 2025-08-14T21:49:17.0113717Z [Previous line repeated 1 more time] 2025-08-14T21:49:17.0114156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:49:17.0114596Z output = func(self, *args, **kwargs) 2025-08-14T21:49:17.0115108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:49:17.0115618Z layer_outputs = layer_module( 2025-08-14T21:49:17.0116055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:17.0116501Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:17.0116976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0117440Z return func(*args, **kwargs) 2025-08-14T21:49:17.0117913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0126785Z return func(*args, **kwargs) 2025-08-14T21:49:17.0127395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0127996Z return func(*args, **kwargs) 2025-08-14T21:49:17.0128635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 
2025-08-14T21:49:17.0129163Z layer_output = apply_chunking_to_forward( 2025-08-14T21:49:17.0129672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:49:17.0130173Z return forward_fn(*input_tensors) 2025-08-14T21:49:17.0130708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:49:17.0131313Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:49:17.0131895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:49:17.0132461Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:49:17.0135045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:49:17.0135475Z return self.act(input) 2025-08-14T21:49:17.0135615Z 2025-08-14T21:49:17.0135723Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0135973Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0136218Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0136465Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0136704Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0136948Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0137195Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0137441Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0137677Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0137919Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0138161Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0138433Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:17.0138915Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:17.0139328Z return mod(**inputs) 2025-08-14T21:49:17.0139727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:49:17.0140169Z output = func(self, *args, **kwargs) 2025-08-14T21:49:17.0140701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward 2025-08-14T21:49:17.0141201Z outputs = self.layoutlm( 2025-08-14T21:49:17.0141644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0142109Z return func(*args, **kwargs) 2025-08-14T21:49:17.0142557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0143009Z return func(*args, **kwargs) 2025-08-14T21:49:17.0143431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:49:17.0143873Z output = func(self, *args, **kwargs) 2025-08-14T21:49:17.0144385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:49:17.0144887Z encoder_outputs = self.encoder( 2025-08-14T21:49:17.0145354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0145841Z return func(*args, **kwargs) 2025-08-14T21:49:17.0146277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0146733Z return func(*args, **kwargs) 2025-08-14T21:49:17.0147200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0147762Z return func(*args, **kwargs) 2025-08-14T21:49:17.0148010Z [Previous line repeated 1 more time] 2025-08-14T21:49:17.0148449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:49:17.0149164Z output = func(self, *args, **kwargs) 2025-08-14T21:49:17.0149665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:49:17.0150171Z layer_outputs = layer_module( 2025-08-14T21:49:17.0150602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:17.0151061Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:17.0151526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0152052Z return func(*args, **kwargs) 2025-08-14T21:49:17.0152593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0153054Z return func(*args, **kwargs) 2025-08-14T21:49:17.0153485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0153943Z return func(*args, **kwargs) 2025-08-14T21:49:17.0154427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 
2025-08-14T21:49:17.0154934Z layer_output = apply_chunking_to_forward( 2025-08-14T21:49:17.0155442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:49:17.0155934Z return forward_fn(*input_tensors) 2025-08-14T21:49:17.0156473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:49:17.0157151Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:49:17.0157711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:49:17.0158259Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:49:17.0159689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:49:17.0160137Z return self.act(input) 2025-08-14T21:49:17.0160284Z 2025-08-14T21:49:17.0160383Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0160644Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0160888Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0161141Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0161497Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0161734Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0166210Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0166459Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0166707Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0166940Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0167183Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0167462Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:17.0167904Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:17.0168311Z return mod(**inputs) 2025-08-14T21:49:17.0168719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:49:17.0169209Z output = func(self, *args, **kwargs) 2025-08-14T21:49:17.0169713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward 2025-08-14T21:49:17.0170293Z outputs = self.layoutlm( 2025-08-14T21:49:17.0170744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0171212Z return func(*args, **kwargs) 2025-08-14T21:49:17.0171658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0172122Z return func(*args, **kwargs) 2025-08-14T21:49:17.0172538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:49:17.0172984Z output = func(self, *args, **kwargs) 2025-08-14T21:49:17.0173493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:49:17.0174001Z encoder_outputs = self.encoder( 2025-08-14T21:49:17.0174453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0174916Z return func(*args, **kwargs) 2025-08-14T21:49:17.0175364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0175821Z return func(*args, **kwargs) 2025-08-14T21:49:17.0176254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0176831Z return func(*args, **kwargs) 2025-08-14T21:49:17.0177071Z [Previous line repeated 1 more time] 2025-08-14T21:49:17.0177500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:49:17.0177957Z output = func(self, *args, **kwargs) 2025-08-14T21:49:17.0178455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:49:17.0178957Z layer_outputs = layer_module( 2025-08-14T21:49:17.0179381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:17.0179860Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:17.0180332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0180785Z return func(*args, **kwargs) 2025-08-14T21:49:17.0181315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0181774Z return func(*args, **kwargs) 2025-08-14T21:49:17.0182213Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:49:17.0182664Z return func(*args, **kwargs) 2025-08-14T21:49:17.0183146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 
2025-08-14T21:49:17.0183660Z layer_output = apply_chunking_to_forward( 2025-08-14T21:49:17.0184155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:49:17.0184645Z return forward_fn(*input_tensors) 2025-08-14T21:49:17.0185176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:49:17.0185772Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:49:17.0186324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:49:17.0186898Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:49:17.0187368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:49:17.0187814Z return self.act(input) 2025-08-14T21:49:17.0187954Z 2025-08-14T21:49:17.0188055Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0188309Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0188556Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0188792Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0189035Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0189281Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0189516Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0189758Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0190003Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0190249Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0190488Z cudagraph partition due to non gpu ops 2025-08-14T21:49:17.0190772Z cudagraph partition due to non gpu ops. 
2025-08-14T21:49:17.0195453Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:49:17.0195861Z     return mod(**inputs)
2025-08-14T21:49:17.0196270Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:49:17.0196718Z     output = func(self, *args, **kwargs)
2025-08-14T21:49:17.0197220Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward
2025-08-14T21:49:17.0197722Z     outputs = self.layoutlm(
2025-08-14T21:49:17.0198176Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:49:17.0198636Z     return func(*args, **kwargs)
2025-08-14T21:49:17.0199078Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:49:17.0199534Z     return func(*args, **kwargs)
2025-08-14T21:49:17.0199954Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:49:17.0200406Z     output = func(self, *args, **kwargs)
2025-08-14T21:49:17.0200928Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward
2025-08-14T21:49:17.0201531Z     encoder_outputs = self.encoder(
2025-08-14T21:49:17.0201997Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:49:17.0202451Z     return func(*args, **kwargs)
2025-08-14T21:49:17.0202925Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:49:17.0203391Z     return func(*args, **kwargs)
2025-08-14T21:49:17.0203840Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:49:17.0204292Z     return func(*args, **kwargs)
2025-08-14T21:49:17.0204526Z   [Previous line repeated 1 more time]
2025-08-14T21:49:17.0204965Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:49:17.0205469Z     output = func(self, *args, **kwargs)
2025-08-14T21:49:17.0206021Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward
2025-08-14T21:49:17.0206520Z     layer_outputs = layer_module(
2025-08-14T21:49:17.0206951Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:49:17.0207394Z     return super().__call__(*args, **kwargs)
2025-08-14T21:49:17.0207868Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:49:17.0208354Z     return func(*args, **kwargs)
2025-08-14T21:49:17.0208788Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:49:17.0209269Z     return func(*args, **kwargs)
2025-08-14T21:49:17.0209705Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:49:17.0210206Z     return func(*args, **kwargs)
2025-08-14T21:49:17.0210678Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward
2025-08-14T21:49:17.0211194Z     layer_output = apply_chunking_to_forward(
2025-08-14T21:49:17.0211699Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:49:17.0212187Z     return forward_fn(*input_tensors)
2025-08-14T21:49:17.0212716Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk
2025-08-14T21:49:17.0213313Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:49:17.0213874Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward
2025-08-14T21:49:17.0214467Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:49:17.0214946Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:49:17.0215373Z     return self.act(input)
2025-08-14T21:49:17.0215511Z 
2025-08-14T21:49:17.0215616Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0215863Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0216110Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0216356Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0216592Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0216834Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0217077Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0217317Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0217563Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0217810Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0218055Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0218352Z cudagraph partition due to non gpu ops.
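The "Found from :" trace above is reported while Inductor compiles the benchmark's forward pass; forward_pass in benchmarks/dynamo/huggingface.py simply calls mod(**inputs) on the compiled model. What follows is a minimal, hypothetical sketch of that call shape with torch.compile on CPU; the module, shapes, and compile options are illustrative assumptions and do not reproduce the exact Inductor configuration used by this job.

import torch
import torch.nn as nn

# Hypothetical stand-in for the LayoutLM intermediate block named in the trace:
# a Linear followed by an activation, running entirely on CPU in this job.
class TinyIntermediate(nn.Module):
    def __init__(self, hidden=64, intermediate=256):
        super().__init__()
        self.dense = nn.Linear(hidden, intermediate)
        self.intermediate_act_fn = nn.GELU()

    def forward(self, hidden_states):
        hidden_states = self.dense(hidden_states)
        hidden_states = self.intermediate_act_fn(hidden_states)
        return hidden_states

mod = torch.compile(TinyIntermediate())            # same entry point the harness compiles
inputs = {"hidden_states": torch.randn(2, 8, 64)}  # CPU tensors, as in this run
out = mod(**inputs)                                # mirrors `return mod(**inputs)` above
print(out.shape)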
2025-08-14T21:49:17.0399600Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:49:17.0400056Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:49:17.0400459Z     return mod(**inputs)
2025-08-14T21:49:17.0400873Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:49:17.0401417Z     output = func(self, *args, **kwargs)
2025-08-14T21:49:17.0401945Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward
2025-08-14T21:49:17.0402456Z     outputs = self.layoutlm(
2025-08-14T21:49:17.0402912Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:49:17.0403416Z     return func(*args, **kwargs)
2025-08-14T21:49:17.0403860Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:49:17.0404321Z     return func(*args, **kwargs)
2025-08-14T21:49:17.0404743Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:49:17.0405180Z     output = func(self, *args, **kwargs)
2025-08-14T21:49:17.0405685Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 654, in forward
2025-08-14T21:49:17.0406214Z     pooled_output = self.pooler(sequence_output)
2025-08-14T21:49:17.0406746Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 431, in forward
2025-08-14T21:49:17.0407270Z     pooled_output = self.activation(pooled_output)
2025-08-14T21:49:17.0407474Z 
2025-08-14T21:49:17.0407574Z cudagraph partition due to non gpu ops
2025-08-14T21:49:17.0407868Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:49:17.0416700Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:49:17.0417253Z     return mod(**inputs)
2025-08-14T21:49:17.0417780Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:49:17.0418389Z     output = func(self, *args, **kwargs)
2025-08-14T21:49:17.0419055Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 911, in forward
2025-08-14T21:49:17.0419819Z     loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
2025-08-14T21:49:17.0420058Z 
2025-08-14T21:49:17.0420188Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:49:17.0420633Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:49:17.0421036Z     return mod(**inputs)
2025-08-14T21:49:17.0421444Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:49:17.0421894Z     output = func(self, *args, **kwargs)
2025-08-14T21:49:17.0422387Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 911, in forward
2025-08-14T21:49:17.0423031Z     loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
2025-08-14T21:49:17.0423321Z 
2025-08-14T21:49:20.3857352Z Compilation time (from dynamo_timed): 34.733052466
2025-08-14T21:49:20.3858716Z pass
2025-08-14T21:49:20.3868926Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:49:20.3870305Z TIMING: _recursive_pre_grad_passes:0.11467 _recursive_joint_graph_passes:1.12899 _recursive_post_grad_passes:0.19383 async_compile.wait:0.8772 code_gen:7.37375 inductor_compile:12.9174 backend_compile:26.43743 gc:0.0031 entire_frame_compile:34.73305 total_wall_time:34.73305
2025-08-14T21:49:20.3871920Z STATS: call_* op count: 860 | FakeTensorMode.__torch_dispatch__:53425 | FakeTensor.__torch_dispatch__:7669 | ProxyTorchDispatchMode.__torch_dispatch__:13107
2025-08-14T21:49:20.3872577Z Dynamo produced 2 graphs covering 860 ops with 0 graph breaks (0 unique)
2025-08-14T21:49:26.9729020Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:49:26.9730143Z   from pkg_resources import resource_filename
2025-08-14T21:49:27.7069856Z 
2025-08-14T21:49:38.5818385Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:49:38.5818745Z loading model: 0it [00:10, ?it/s]
2025-08-14T21:49:38.5862106Z cpu eval M2M100ForConditionalGeneration
2025-08-14T21:49:40.3841121Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:49:41.4180408Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:49:42.4836868Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
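The TIMING and STATS summaries above are single log lines of space-separated name:value pairs. A small, hypothetical helper like the one below (not part of the benchmark harness) is enough to turn the TIMING line into a dict so phases can be compared across runs; the sample values are copied from the log above.

def parse_timing(line):
    """Parse an Inductor 'TIMING: name:value ...' log line into {name: seconds}."""
    _, _, payload = line.partition("TIMING:")
    timings = {}
    for token in payload.split():
        name, _, value = token.rpartition(":")
        timings[name] = float(value)
    return timings

timing = parse_timing(
    "TIMING: _recursive_pre_grad_passes:0.11467 inductor_compile:12.9174 "
    "backend_compile:26.43743 entire_frame_compile:34.73305 total_wall_time:34.73305"
)
assert timing["total_wall_time"] == 34.73305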
2025-08-14T21:50:13.4631751Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:50:13.4641002Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.4641600Z     return mod(**inputs)
2025-08-14T21:50:13.4642264Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.4642932Z     outputs = self.model(
2025-08-14T21:50:13.4643711Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
2025-08-14T21:50:13.4644629Z     encoder_outputs = self.encoder(
2025-08-14T21:50:13.4645525Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 844, in forward
2025-08-14T21:50:13.4646543Z     embed_pos = self.embed_positions(input_ids, inputs_embeds)
2025-08-14T21:50:13.4647144Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
2025-08-14T21:50:13.4649077Z     return func(*args, **kwargs)
2025-08-14T21:50:13.4649568Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 148, in forward
2025-08-14T21:50:13.4650232Z     position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length).to(
2025-08-14T21:50:13.4650969Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 80, in create_position_ids_from_input_ids
2025-08-14T21:50:13.4651547Z     mask = input_ids.ne(padding_idx).int()
2025-08-14T21:50:13.4651736Z 
2025-08-14T21:50:13.4651845Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4652149Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4652404Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4652652Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4652890Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4653134Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4653373Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4653612Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4653855Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4654110Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4654348Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4654587Z cudagraph partition due to non gpu ops
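The statement quoted at line 80 of modeling_m2m_100.py above, together with the cumsum statement quoted at line 81 in the traces that follow, is the position-id computation that keeps being reported as a non-GPU region. A standalone sketch of that computation, reconstructed only from the two quoted statements (the real transformers helper may differ in details such as the final return), looks like this:

import torch

def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    # Both statements below are quoted verbatim in the traces; the return line
    # is an assumption, since it is not shown in the log.
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
    return incremental_indices.long() + padding_idx

ids = torch.tensor([[2, 15, 97, 1, 1]])  # toy batch; assume padding_idx == 1
print(create_position_ids_from_input_ids(ids, padding_idx=1))  # tensor([[2, 3, 4, 1, 1]])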
2025-08-14T21:50:13.4654875Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:50:13.4655335Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.4656034Z     return mod(**inputs)
2025-08-14T21:50:13.4656745Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.4657586Z     outputs = self.model(
2025-08-14T21:50:13.4658438Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
2025-08-14T21:50:13.4659316Z     encoder_outputs = self.encoder(
2025-08-14T21:50:13.4659991Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 844, in forward
2025-08-14T21:50:13.4660537Z     embed_pos = self.embed_positions(input_ids, inputs_embeds)
2025-08-14T21:50:13.4661037Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
2025-08-14T21:50:13.4661602Z     return func(*args, **kwargs)
2025-08-14T21:50:13.4662182Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 148, in forward
2025-08-14T21:50:13.4662843Z     position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length).to(
2025-08-14T21:50:13.4663583Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 81, in create_position_ids_from_input_ids
2025-08-14T21:50:13.4664292Z     incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
2025-08-14T21:50:13.4664603Z 
2025-08-14T21:50:13.4664748Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:50:13.4665189Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.4665592Z     return mod(**inputs)
2025-08-14T21:50:13.4666110Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.4666595Z     outputs = self.model(
2025-08-14T21:50:13.4667040Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
2025-08-14T21:50:13.4667577Z     encoder_outputs = self.encoder(
2025-08-14T21:50:13.4668053Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 844, in forward
2025-08-14T21:50:13.4668623Z     embed_pos = self.embed_positions(input_ids, inputs_embeds)
2025-08-14T21:50:13.4669120Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
2025-08-14T21:50:13.4669570Z     return func(*args, **kwargs)
2025-08-14T21:50:13.4670035Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 148, in forward
2025-08-14T21:50:13.4670685Z     position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length).to(
2025-08-14T21:50:13.4671422Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 81, in create_position_ids_from_input_ids
2025-08-14T21:50:13.4672118Z     incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
2025-08-14T21:50:13.4672427Z 
2025-08-14T21:50:13.4672539Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4672793Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4673050Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4673299Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4673538Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4673784Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4674030Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4674311Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:50:13.4674752Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.4675156Z     return mod(**inputs)
2025-08-14T21:50:13.4675683Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.4682387Z     outputs = self.model(
2025-08-14T21:50:13.4682871Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
2025-08-14T21:50:13.4683758Z     encoder_outputs = self.encoder(
2025-08-14T21:50:13.4684737Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward
2025-08-14T21:50:13.4685589Z     layer_outputs = encoder_layer(
2025-08-14T21:50:13.4686043Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:13.4686504Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:13.4687020Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward
2025-08-14T21:50:13.4687527Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:50:13.4688097Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
2025-08-14T21:50:13.4688611Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:13.4689172Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:50:13.4689781Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:50:13.4690027Z 
2025-08-14T21:50:13.4690160Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:50:13.4690703Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.4691137Z     return mod(**inputs)
2025-08-14T21:50:13.4691600Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.4692111Z     outputs = self.model(
2025-08-14T21:50:13.4692566Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
2025-08-14T21:50:13.4693051Z     encoder_outputs = self.encoder(
2025-08-14T21:50:13.4693551Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward
2025-08-14T21:50:13.4694034Z     layer_outputs = encoder_layer(
2025-08-14T21:50:13.4694463Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:13.4694965Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:13.4695458Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward
2025-08-14T21:50:13.4695962Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:50:13.4696458Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
2025-08-14T21:50:13.4696976Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:13.4697531Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:50:13.4698103Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:50:13.4698311Z 
2025-08-14T21:50:13.4698413Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4698669Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4698950Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:50:13.4699441Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.4699844Z     return mod(**inputs)
2025-08-14T21:50:13.4700297Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.4700774Z     outputs = self.model(
2025-08-14T21:50:13.4701226Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
2025-08-14T21:50:13.4701706Z     encoder_outputs = self.encoder(
2025-08-14T21:50:13.4702183Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward
2025-08-14T21:50:13.4702654Z     layer_outputs = encoder_layer(
2025-08-14T21:50:13.4703119Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:13.4703575Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:13.4704061Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 389, in forward
2025-08-14T21:50:13.4704679Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:50:13.4709135Z 
2025-08-14T21:50:13.4709236Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4709502Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4709746Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4710000Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4710244Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4710497Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4710734Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4710986Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4711268Z cudagraph partition due to non gpu ops.
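The two SDPA frames reported in the traces above (lines 81 and 91 of integrations/sdpa_attention.py) bracket the fused attention call and the layout fix applied to its output. Below is a minimal, self-contained illustration of that pattern; the tensor shapes and variable names are assumptions for the example only.

import torch
import torch.nn.functional as F

batch, heads, seq, head_dim = 2, 4, 8, 16
query = torch.randn(batch, heads, seq, head_dim)
key = torch.randn(batch, heads, seq, head_dim)
value = torch.randn(batch, heads, seq, head_dim)

# The call quoted at sdpa_attention.py line 81 in the trace above.
attn_output = F.scaled_dot_product_attention(query, key, value)

# The layout fix quoted at line 91: back to (batch, seq, heads, head_dim),
# made contiguous before the output projection.
attn_output = attn_output.transpose(1, 2).contiguous()
print(attn_output.shape)  # torch.Size([2, 8, 4, 16])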
Found from : 2025-08-14T21:50:13.4711707Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.4712110Z return mod(**inputs) 2025-08-14T21:50:13.4712572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.4713050Z outputs = self.model( 2025-08-14T21:50:13.4713508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:50:13.4714113Z encoder_outputs = self.encoder( 2025-08-14T21:50:13.4714589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:50:13.4715092Z layer_outputs = encoder_layer( 2025-08-14T21:50:13.4715524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.4715980Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.4716468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:50:13.4716969Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:50:13.4717474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.4717990Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.4718541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:13.4719150Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:13.4719477Z 2025-08-14T21:50:13.4719610Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.4720123Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.4720522Z return mod(**inputs) 2025-08-14T21:50:13.4720983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.4721533Z outputs = self.model( 2025-08-14T21:50:13.4721991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:50:13.4722470Z encoder_outputs = self.encoder( 2025-08-14T21:50:13.4722949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:50:13.4723428Z layer_outputs = encoder_layer( 2025-08-14T21:50:13.4723858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.4724307Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.4724820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:50:13.4725324Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:50:13.4725840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.4726346Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.4726896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:13.4727468Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:13.4727673Z 2025-08-14T21:50:13.4727774Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.4728033Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.4728350Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:50:13.4729064Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.4729808Z     return mod(**inputs)
2025-08-14T21:50:13.4730320Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.4730807Z     outputs = self.model(
2025-08-14T21:50:13.4731273Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
2025-08-14T21:50:13.4731829Z     encoder_outputs = self.encoder(
2025-08-14T21:50:13.4732362Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward
2025-08-14T21:50:13.4732842Z     layer_outputs = encoder_layer(
2025-08-14T21:50:13.4733309Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:13.4738022Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:13.4738574Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 389, in forward
2025-08-14T21:50:13.4739112Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:50:13.4739340Z 
2025-08-14T21:50:13.4739442Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4739713Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4739963Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4740211Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4740462Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4740708Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4740950Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4741445Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4741727Z cudagraph partition due to non gpu ops
Found from :
2025-08-14T21:50:13.4742187Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.4742589Z     return mod(**inputs)
2025-08-14T21:50:13.4743055Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.4743537Z     outputs = self.model(
2025-08-14T21:50:13.4743994Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
2025-08-14T21:50:13.4744471Z     encoder_outputs = self.encoder(
2025-08-14T21:50:13.4744949Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward
2025-08-14T21:50:13.4745434Z     layer_outputs = encoder_layer(
2025-08-14T21:50:13.4745871Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:13.4746319Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:13.4746847Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward
2025-08-14T21:50:13.4747364Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:50:13.4747859Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
2025-08-14T21:50:13.4748474Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:13.4749371Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:50:13.4749984Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:50:13.4750218Z 
2025-08-14T21:50:13.4750348Z cudagraph partition due to non gpu ops
Found from :
2025-08-14T21:50:13.4750797Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.4751203Z     return mod(**inputs)
2025-08-14T21:50:13.4751656Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.4752131Z     outputs = self.model(
2025-08-14T21:50:13.4752590Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
2025-08-14T21:50:13.4753070Z     encoder_outputs = self.encoder(
2025-08-14T21:50:13.4753543Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward
2025-08-14T21:50:13.4754089Z     layer_outputs = encoder_layer(
2025-08-14T21:50:13.4754520Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:13.4754975Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:13.4755642Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward
2025-08-14T21:50:13.4756318Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:50:13.4756822Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
2025-08-14T21:50:13.4757330Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:13.4757888Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:50:13.4758464Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:50:13.4758675Z 
2025-08-14T21:50:13.4758783Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4759038Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.4759325Z cudagraph partition due to non gpu ops
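The encoder-side traces above all end at the same two spots in the transformers M2M-100 code: the torch.nn.functional.scaled_dot_product_attention call (and the transpose(1, 2).contiguous() right after it) in sdpa_attention_forward, and the self.activation_fn(self.fc1(hidden_states)) projection in the encoder layer. As a rough, illustrative sketch only (not the benchmark harness; the tensor shapes, the ReLU activation, and the fc1 size below are assumptions), that call pattern looks roughly like this under torch.compile:

import torch
import torch.nn as nn
import torch.nn.functional as F

def encoder_attention_step(q, k, v, fc1):
    # q, k, v: (batch, num_heads, seq_len, head_dim), the usual SDPA layout.
    attn = F.scaled_dot_product_attention(q, k, v)        # mirrors sdpa_attention.py line 81 in the traces
    attn = attn.transpose(1, 2).contiguous()              # mirrors sdpa_attention.py line 91 in the traces
    attn = attn.reshape(attn.size(0), attn.size(1), -1)   # merge heads back into the hidden dimension
    return F.relu(fc1(attn))                              # stands in for activation_fn(self.fc1(...)) at modeling_m2m_100.py line 389

if __name__ == "__main__":
    b, h, s, d = 2, 4, 16, 8
    q, k, v = (torch.randn(b, h, s, d) for _ in range(3))
    fc1 = nn.Linear(h * d, 64)
    out = torch.compile(encoder_attention_step)(q, k, v, fc1)
    print(out.shape)  # torch.Size([2, 16, 64])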
Found from : 2025-08-14T21:50:13.5063674Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5064073Z return mod(**inputs) 2025-08-14T21:50:13.5064519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5064994Z outputs = self.model( 2025-08-14T21:50:13.5065443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5065928Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5066397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5066911Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5067413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5067944Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5068435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:50:13.5068950Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:13.5069456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5069955Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5070512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:13.5071108Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:13.5071346Z 2025-08-14T21:50:13.5071480Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:50:13.5071912Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.5072311Z     return mod(**inputs)
2025-08-14T21:50:13.5072762Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.5073230Z     outputs = self.model(
2025-08-14T21:50:13.5073679Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward
2025-08-14T21:50:13.5074154Z     decoder_outputs = self.decoder(
2025-08-14T21:50:13.5074628Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward
2025-08-14T21:50:13.5075097Z     layer_outputs = decoder_layer(
2025-08-14T21:50:13.5075531Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:13.5075979Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:13.5076455Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward
2025-08-14T21:50:13.5076964Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:50:13.5077492Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
2025-08-14T21:50:13.5077994Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:13.5078536Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:50:13.5079122Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:50:13.5079329Z
2025-08-14T21:50:13.5079428Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5079681Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5079923Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5080170Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5080417Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5080656Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5080895Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5081144Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5081509Z cudagraph partition due to non gpu ops.
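The two innermost frames that keep recurring are lines 81 and 91 of transformers' SDPA integration in this environment: the fused torch.nn.functional.scaled_dot_product_attention call and the transpose(1, 2).contiguous() that reshapes its output back to (batch, seq, hidden). A small self-contained sketch of that pattern; the tensor sizes are illustrative assumptions, not the benchmark's shapes:

```python
# Sketch of the attention pattern the sdpa_attention.py frames above point at.
import torch
import torch.nn.functional as F

batch, heads, q_len, kv_len, head_dim = 2, 16, 128, 128, 64
query = torch.randn(batch, heads, q_len, head_dim)
key = torch.randn(batch, heads, kv_len, head_dim)
value = torch.randn(batch, heads, kv_len, head_dim)

# Fused attention kernel (the call at sdpa_attention.py:81 in the traces).
attn_output = F.scaled_dot_product_attention(query, key, value, is_causal=False)

# SDPA returns (batch, heads, q_len, head_dim); the model wants (batch, q_len, hidden),
# so the integration transposes and forces a contiguous copy before reshaping
# (the call at sdpa_attention.py:91 in the traces).
attn_output = attn_output.transpose(1, 2).contiguous()
attn_output = attn_output.reshape(batch, q_len, heads * head_dim)
print(attn_output.shape)  # torch.Size([2, 128, 1024])
```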
Found from :
2025-08-14T21:50:13.5090412Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.5090941Z     return mod(**inputs)
2025-08-14T21:50:13.5091536Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.5092169Z     outputs = self.model(
2025-08-14T21:50:13.5092759Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward
2025-08-14T21:50:13.5093434Z     decoder_outputs = self.decoder(
2025-08-14T21:50:13.5093934Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward
2025-08-14T21:50:13.5094435Z     layer_outputs = decoder_layer(
2025-08-14T21:50:13.5094869Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:13.5095315Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:13.5095797Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward
2025-08-14T21:50:13.5096386Z     hidden_states, cross_attn_weights = self.encoder_attn(
2025-08-14T21:50:13.5096967Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
2025-08-14T21:50:13.5097472Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:13.5098020Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:50:13.5098623Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:50:13.5098854Z
2025-08-14T21:50:13.5098987Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:50:13.5099421Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.5099823Z     return mod(**inputs)
2025-08-14T21:50:13.5100272Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.5100751Z     outputs = self.model(
2025-08-14T21:50:13.5101194Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward
2025-08-14T21:50:13.5101673Z     decoder_outputs = self.decoder(
2025-08-14T21:50:13.5102143Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward
2025-08-14T21:50:13.5102622Z     layer_outputs = decoder_layer(
2025-08-14T21:50:13.5103044Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:13.5103513Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:13.5104001Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward
2025-08-14T21:50:13.5104515Z     hidden_states, cross_attn_weights = self.encoder_attn(
2025-08-14T21:50:13.5105054Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
2025-08-14T21:50:13.5105559Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:13.5106112Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:50:13.5106676Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:50:13.5106887Z
2025-08-14T21:50:13.5106986Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5107243Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5107517Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:50:13.5107957Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.5108356Z     return mod(**inputs)
2025-08-14T21:50:13.5108803Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.5109270Z     outputs = self.model(
2025-08-14T21:50:13.5109721Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward
2025-08-14T21:50:13.5110226Z     decoder_outputs = self.decoder(
2025-08-14T21:50:13.5110690Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward
2025-08-14T21:50:13.5111318Z     layer_outputs = decoder_layer(
2025-08-14T21:50:13.5111753Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:13.5112207Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:13.5112694Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 504, in forward
2025-08-14T21:50:13.5113236Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:50:13.5113465Z
2025-08-14T21:50:13.5113562Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5113821Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5114063Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5114308Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5114555Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5114791Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5115043Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5115286Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5115613Z cudagraph partition due to non gpu ops.
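The repeated "cudagraph partition due to non gpu ops" messages come from Inductor's CUDA-graph path: work that does not run on the GPU cannot be captured into a CUDA graph, so the compiled graph is split around it and each boundary is reported with the stack trace that led to it. A rough, hedged sketch of the kind of mixed-device function that can hit this condition; the compile mode, shapes, and whether a given build logs a skip or a partition are assumptions, not facts from this job:

```python
# Toy sketch only: put a CPU-tensor op inside an otherwise-GPU compiled region.
# Non-GPU work cannot live inside a CUDA graph, so Inductor has to work around it.
import torch

@torch.compile(mode="reduce-overhead")  # enables the cudagraph path when CUDA is available
def mixed(x_gpu, step_cpu):
    step = step_cpu + 1                      # runs on a CPU tensor, not capturable
    return torch.relu(x_gpu @ x_gpu) * step.to(x_gpu.device)

if torch.cuda.is_available():
    x = torch.randn(64, 64, device="cuda")
    step = torch.tensor(2.0)                 # deliberately left on the CPU
    for _ in range(3):                       # a few iterations so recording/warm-up kicks in
        out = mixed(x, step)
    print(out.float().mean().item())
```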
Found from : 2025-08-14T21:50:13.5116061Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5116465Z return mod(**inputs) 2025-08-14T21:50:13.5116921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5117393Z outputs = self.model( 2025-08-14T21:50:13.5117849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5118329Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5118791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5119277Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5119708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5120160Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5120666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:50:13.5121174Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:13.5121758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5122311Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5122861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:13.5123462Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:13.5123692Z 2025-08-14T21:50:13.5123831Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5124267Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5124670Z return mod(**inputs) 2025-08-14T21:50:13.5125119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5131905Z outputs = self.model( 2025-08-14T21:50:13.5132352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5132836Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5133318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5133829Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5134260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5134746Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5135230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:50:13.5135737Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:13.5136243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5136749Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5137301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:13.5137858Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:13.5138067Z 2025-08-14T21:50:13.5138165Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5138417Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5138661Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5138912Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5139151Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5139389Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5139623Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5139932Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5140240Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5140694Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5141099Z return mod(**inputs) 2025-08-14T21:50:13.5141555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5142026Z outputs = self.model( 2025-08-14T21:50:13.5142475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5142954Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5143424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5143922Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5144374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5144841Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5145342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:50:13.5145862Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:13.5146381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5146890Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5147436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:13.5148034Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:13.5148273Z 2025-08-14T21:50:13.5148407Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5149268Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5149668Z return mod(**inputs) 2025-08-14T21:50:13.5150121Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5150597Z outputs = self.model( 2025-08-14T21:50:13.5151036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5151571Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5152044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5152558Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5152985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5153437Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5153922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:50:13.5158610Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:13.5159128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5159634Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5160190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:13.5160759Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:13.5160964Z 2025-08-14T21:50:13.5161063Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5161401Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5161690Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5162124Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5162524Z return mod(**inputs) 2025-08-14T21:50:13.5162976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5163445Z outputs = self.model( 2025-08-14T21:50:13.5163891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5164378Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5164857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5165331Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5165803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5166257Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5166742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 504, in forward 2025-08-14T21:50:13.5167272Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:13.5167527Z 2025-08-14T21:50:13.5167627Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5167876Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5168119Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5168363Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5168602Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5168911Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5169158Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5169463Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5169739Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5170178Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5170581Z return mod(**inputs) 2025-08-14T21:50:13.5171042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5171515Z outputs = self.model( 2025-08-14T21:50:13.5171968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5172478Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5172950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5173451Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5173877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5174325Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5174809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:50:13.5175318Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:13.5175828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5176331Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5176876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:13.5177473Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:13.5177709Z 2025-08-14T21:50:13.5177839Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5178283Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5178675Z return mod(**inputs) 2025-08-14T21:50:13.5179124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5179595Z outputs = self.model( 2025-08-14T21:50:13.5180039Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5180516Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5180987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5181469Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5181891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5182343Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5182846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:50:13.5187598Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:13.5188159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5188696Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5189254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:13.5189820Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:13.5190029Z 2025-08-14T21:50:13.5190127Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5190383Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5190631Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5190867Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5191112Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5191359Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5191596Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5191837Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5192118Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5192558Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5192957Z return mod(**inputs) 2025-08-14T21:50:13.5193407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5193910Z outputs = self.model( 2025-08-14T21:50:13.5194354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5194857Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5195330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5195810Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5196233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5196680Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5197170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:50:13.5197685Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:13.5198326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5198836Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5199393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:13.5199985Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:13.5200229Z 2025-08-14T21:50:13.5200358Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5200802Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5201278Z return mod(**inputs) 2025-08-14T21:50:13.5201744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5202224Z outputs = self.model( 2025-08-14T21:50:13.5202683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5203155Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5203639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5204119Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5204573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5205019Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5205508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:50:13.5206048Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:13.5206562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5207073Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5207623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:13.5208189Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:13.5208394Z 2025-08-14T21:50:13.5208495Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5208747Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5209033Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5209289Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5209379Z return mod(**inputs) 2025-08-14T21:50:13.5209707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5209826Z outputs = self.model( 2025-08-14T21:50:13.5210158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5210253Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5210598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5210697Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5210980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5211089Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5211408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 504, in forward 2025-08-14T21:50:13.5211560Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:13.5211573Z 2025-08-14T21:50:13.5211678Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5211775Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5211877Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5211969Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5212066Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5212164Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5216503Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5216602Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5216738Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5217041Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5217122Z return mod(**inputs) 2025-08-14T21:50:13.5217455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5217542Z outputs = self.model( 2025-08-14T21:50:13.5217868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5217961Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5218279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5218379Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5218689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5218790Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5219124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:50:13.5219249Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:13.5219605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5219728Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5220096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:13.5220269Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:13.5220282Z 2025-08-14T21:50:13.5220409Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5220668Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5220750Z return mod(**inputs) 2025-08-14T21:50:13.5221073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5221165Z outputs = self.model( 2025-08-14T21:50:13.5221486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5221605Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5221923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5222035Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5222325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5222427Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5222744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:50:13.5222870Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:13.5223189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5223315Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5223679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:13.5223815Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:13.5223830Z 2025-08-14T21:50:13.5223935Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5224031Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5224125Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5224224Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5224315Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5224412Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5224502Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5224592Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5224723Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5224977Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5225056Z return mod(**inputs) 2025-08-14T21:50:13.5225391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5225474Z outputs = self.model( 2025-08-14T21:50:13.5225801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5225891Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5226231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5226327Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5226606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5226725Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5227120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:50:13.5227307Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:13.5227635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5227754Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5228120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:13.5228288Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:13.5228301Z 2025-08-14T21:50:13.5228429Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5228687Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5228771Z return mod(**inputs) 2025-08-14T21:50:13.5229091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5229200Z outputs = self.model( 2025-08-14T21:50:13.5229519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5229629Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5229959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5230048Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5230339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5230437Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5230757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:50:13.5230898Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:13.5231218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5231345Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5231719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:13.5231852Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:13.5231865Z 2025-08-14T21:50:13.5231968Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5232058Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5232183Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5232443Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5232526Z return mod(**inputs) 2025-08-14T21:50:13.5232856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5232941Z outputs = self.model( 2025-08-14T21:50:13.5233260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5233359Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5233697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5233787Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5234077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5234177Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5234529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 504, in forward 2025-08-14T21:50:13.5234683Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:13.5234697Z 2025-08-14T21:50:13.5234790Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5234889Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5234981Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5235081Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5235174Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5235270Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5235372Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5235464Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5235589Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5235843Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5235922Z return mod(**inputs) 2025-08-14T21:50:13.5236244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5236356Z outputs = self.model( 2025-08-14T21:50:13.5236681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5236778Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5237118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5237208Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5237492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5237589Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5237908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:50:13.5238038Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:13.5238356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5238485Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5238853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:13.5239016Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:13.5239028Z 2025-08-14T21:50:13.5239159Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5239406Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5239496Z return mod(**inputs) 2025-08-14T21:50:13.5239823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5239905Z outputs = self.model( 2025-08-14T21:50:13.5240238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5240332Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5240654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5240744Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5241046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5241151Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5250212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:50:13.5250354Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:13.5250830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5250951Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5251327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:13.5251461Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:13.5251473Z 2025-08-14T21:50:13.5251570Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5251671Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5251764Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5251855Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5251953Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5252041Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5252139Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5252232Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5252357Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5252612Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5252727Z return mod(**inputs) 2025-08-14T21:50:13.5253048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5253165Z outputs = self.model( 2025-08-14T21:50:13.5253487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5253583Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5253909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5253995Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5254284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5254381Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5254700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:50:13.5254838Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:13.5255159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5255285Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5255654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:13.5257892Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:13.5257906Z 2025-08-14T21:50:13.5258040Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5258292Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5258381Z return mod(**inputs) 2025-08-14T21:50:13.5258707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5258793Z outputs = self.model( 2025-08-14T21:50:13.5259123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5259216Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5259576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5259671Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5259957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5260086Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5260405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:50:13.5260541Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:13.5260869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5260986Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5261358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:13.5261490Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:13.5261503Z 2025-08-14T21:50:13.5261602Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5261704Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5261832Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5262083Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5262172Z return mod(**inputs) 2025-08-14T21:50:13.5262517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5262608Z outputs = self.model( 2025-08-14T21:50:13.5262952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5263043Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5263376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5263467Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5263747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5263854Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5264174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 504, in forward 2025-08-14T21:50:13.5264330Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:13.5264343Z 2025-08-14T21:50:13.5264441Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5264536Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5264634Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5264725Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5264815Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5264917Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5265006Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5265100Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5265224Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5265476Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5265561Z return mod(**inputs) 2025-08-14T21:50:13.5265883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5265967Z outputs = self.model( 2025-08-14T21:50:13.5266296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5266389Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5266736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5266825Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5267105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5267209Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5267560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:50:13.5267686Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:13.5268011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5268128Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5268510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:13.5268670Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:13.5268683Z 2025-08-14T21:50:13.5268807Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5269064Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5269142Z return mod(**inputs) 2025-08-14T21:50:13.5269470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5269554Z outputs = self.model( 2025-08-14T21:50:13.5269895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5269990Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5270410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5270509Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5270842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5270941Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5271266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:50:13.5271387Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:13.5271706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5271830Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5272195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:13.5272333Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:13.5272345Z 2025-08-14T21:50:13.5272441Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5272533Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5272629Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5272719Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5272808Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5272903Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5272995Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5273091Z cudagraph partition due to non gpu ops 2025-08-14T21:50:13.5273216Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:13.5273466Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:13.5273552Z return mod(**inputs) 2025-08-14T21:50:13.5273874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:50:13.5273958Z outputs = self.model( 2025-08-14T21:50:13.5274308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:50:13.5274398Z decoder_outputs = self.decoder( 2025-08-14T21:50:13.5274725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:50:13.5274845Z layer_outputs = decoder_layer( 2025-08-14T21:50:13.5275165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:13.5275273Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:13.5275593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:50:13.5275731Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:13.5276058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:50:13.5276180Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:13.5276556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:13.5276718Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:13.5276731Z 2025-08-14T21:50:13.5276860Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:50:13.5277116Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.5277218Z     return mod(**inputs)
2025-08-14T21:50:13.5277545Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.5277652Z     outputs = self.model(
2025-08-14T21:50:13.5277974Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward
2025-08-14T21:50:13.5278076Z     decoder_outputs = self.decoder(
2025-08-14T21:50:13.5278397Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward
2025-08-14T21:50:13.5278487Z     layer_outputs = decoder_layer(
2025-08-14T21:50:13.5278774Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:13.5278873Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:13.5279203Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward
2025-08-14T21:50:13.5279332Z     hidden_states, cross_attn_weights = self.encoder_attn(
2025-08-14T21:50:13.5279652Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
2025-08-14T21:50:13.5279774Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:13.5280144Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:50:13.5280286Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:50:13.5280298Z
2025-08-14T21:50:13.5280393Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5280486Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5280617Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:50:13.5280868Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.5280952Z     return mod(**inputs)
2025-08-14T21:50:13.5281353Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.5281441Z     outputs = self.model(
2025-08-14T21:50:13.5281773Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward
2025-08-14T21:50:13.5281899Z     decoder_outputs = self.decoder(
2025-08-14T21:50:13.5282217Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward
2025-08-14T21:50:13.5282316Z     layer_outputs = decoder_layer(
2025-08-14T21:50:13.5282615Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:13.5282715Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:13.5283047Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 504, in forward
2025-08-14T21:50:13.5283202Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:50:13.5283216Z
2025-08-14T21:50:13.5283323Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5283417Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5283511Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5283617Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5283712Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5283807Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5283910Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5284007Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5284143Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:50:13.5284394Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.5284474Z     return mod(**inputs)
2025-08-14T21:50:13.5289081Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.5289166Z     outputs = self.model(
2025-08-14T21:50:13.5289512Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward
2025-08-14T21:50:13.5289613Z     decoder_outputs = self.decoder(
2025-08-14T21:50:13.5289932Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward
2025-08-14T21:50:13.5290028Z     layer_outputs = decoder_layer(
2025-08-14T21:50:13.5290306Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:13.5290406Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:13.5290732Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward
2025-08-14T21:50:13.5290854Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:50:13.5291182Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
2025-08-14T21:50:13.5291301Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:13.5291667Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:50:13.5291837Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:50:13.5291850Z
2025-08-14T21:50:13.5291974Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:50:13.5292227Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.5292314Z     return mod(**inputs)
2025-08-14T21:50:13.5292634Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.5292727Z     outputs = self.model(
2025-08-14T21:50:13.5293045Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward
2025-08-14T21:50:13.5293139Z     decoder_outputs = self.decoder(
2025-08-14T21:50:13.5293487Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward
2025-08-14T21:50:13.5293576Z     layer_outputs = decoder_layer(
2025-08-14T21:50:13.5293864Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:13.5293964Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:13.5294304Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward
2025-08-14T21:50:13.5294429Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:50:13.5294747Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
2025-08-14T21:50:13.5294865Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:13.5295242Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:50:13.5295373Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:50:13.5295386Z
2025-08-14T21:50:13.5295485Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5295578Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5295668Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5295764Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5295855Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5295943Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5296038Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5296148Z cudagraph partition due to non gpu ops
2025-08-14T21:50:13.5296279Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:50:13.5296530Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:13.5296632Z     return mod(**inputs)
2025-08-14T21:50:13.5296961Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:50:13.5297045Z     outputs = self.model(
2025-08-14T21:50:13.5297363Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward
2025-08-14T21:50:13.5297462Z     decoder_outputs = self.decoder(
2025-08-14T21:50:13.5297782Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward
2025-08-14T21:50:13.5297873Z     layer_outputs = decoder_layer(
2025-08-14T21:50:13.5307425Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:13.5307576Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:13.5307933Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward
2025-08-14T21:50:13.5308116Z     hidden_states, cross_attn_weights = self.encoder_attn(
2025-08-14T21:50:13.5308488Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
2025-08-14T21:50:13.5308616Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:13.5309004Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:50:13.5309176Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:50:13.5309190Z
2025-08-14T21:50:13.5309336Z cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1422, in forward
    lm_logits = self.lm_head(outputs[0])
2025-08-14T21:50:13.5435994Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1429, in forward
    masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:50:26.0112903Z Compilation time (from dynamo_timed): 41.127178246
2025-08-14T21:50:26.0253181Z pass
2025-08-14T21:50:26.0253635Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:50:26.0254650Z TIMING: _recursive_pre_grad_passes:0.13063 _recursive_joint_graph_passes:1.50498 _recursive_post_grad_passes:0.20361 async_compile.wait:1.0299 code_gen:10.84992 inductor_compile:17.45197 backend_compile:33.24252 gc:0.00048 entire_frame_compile:41.12718 total_wall_time:41.12718
2025-08-14T21:50:26.0255841Z STATS: call_* op count: 1014 | FakeTensorMode.__torch_dispatch__:62443 | FakeTensor.__torch_dispatch__:9034 | ProxyTorchDispatchMode.__torch_dispatch__:13993
2025-08-14T21:50:26.0256478Z Dynamo produced 1 graphs covering 1014 ops with 0 graph breaks (0 unique)
2025-08-14T21:50:33.1780445Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:50:33.1781658Z   from pkg_resources import resource_filename
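The UserWarning above is emitted while importing llvmlite, which still uses pkg_resources; it is a deprecation notice only and does not affect the benchmark result. For code under one's own control, the stdlib replacement is importlib.resources; a hedged sketch of the equivalent lookup, with placeholder package and resource names:

    # Deprecated pattern (what llvmlite's ffi.py still does):
    #   from pkg_resources import resource_filename
    #   path = resource_filename("somepackage", "data/config.json")

    # Stdlib replacement, available on Python 3.9+.
    # "somepackage" and "data/config.json" are hypothetical names for illustration.
    from importlib.resources import files

    resource = files("somepackage").joinpath("data/config.json")
    print(resource.read_text())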
2025-08-14T21:50:38.4495409Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:50:38.4495774Z loading model: 0it [00:04, ?it/s]
2025-08-14T21:50:38.4523959Z cpu eval MBartForCausalLM
2025-08-14T21:50:40.8333511Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:50:41.9846478Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:50:43.0804374Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
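The repeated "Trying to call the empty_gpu_cache for device: cpu" warnings are benign: the benchmark harness tries to release accelerator memory between runs, and on this CPU-only configuration there is no cache to release. A rough sketch of a device-guarded cache release of the kind the warning refers to (the helper name is illustrative, not the harness's actual function):

    import torch

    def maybe_empty_cache(device: str) -> None:
        """Release cached allocator blocks only for devices that have a caching allocator."""
        if device == "cuda" and torch.cuda.is_available():
            torch.cuda.empty_cache()
        elif device == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
            torch.xpu.empty_cache()
        # On CPU there is no accelerator cache to empty, so this is simply a no-op
        # instead of the warning seen in the log.

    maybe_empty_cache("cpu")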
Found from : 2025-08-14T21:50:57.6585270Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6585750Z return mod(**inputs) 2025-08-14T21:50:57.6586436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6587047Z outputs = self.model.decoder( 2025-08-14T21:50:57.6587602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6588087Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6588583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6589058Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6589571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6590093Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6590611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6591130Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6591823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:57.6592657Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:57.6592903Z 2025-08-14T21:50:57.6593037Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6593487Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6593886Z return mod(**inputs) 2025-08-14T21:50:57.6594412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6595103Z outputs = self.model.decoder( 2025-08-14T21:50:57.6595656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6596197Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6596792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6597332Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6598044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6598594Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6599170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6599690Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6600239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:57.6600814Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:57.6601021Z 2025-08-14T21:50:57.6601135Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6601537Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6601858Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:57.6602401Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6602869Z return mod(**inputs) 2025-08-14T21:50:57.6603345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6603840Z outputs = self.model.decoder( 2025-08-14T21:50:57.6604321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6604834Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6605272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6605892Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6606391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:50:57.6607007Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:57.6611790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:57.6612290Z return self.act(input) 2025-08-14T21:50:57.6612430Z 2025-08-14T21:50:57.6612577Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6612858Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6613142Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6613432Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6613749Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6614039Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6614348Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6614611Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6614924Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6615457Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6615902Z return mod(**inputs) 2025-08-14T21:50:57.6616391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6616874Z outputs = self.model.decoder( 2025-08-14T21:50:57.6617354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6617826Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6618253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6618702Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6619193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6619736Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6620247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6620760Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6621385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:57.6622070Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:57.6622313Z 2025-08-14T21:50:57.6622443Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:57.6622886Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6623286Z return mod(**inputs) 2025-08-14T21:50:57.6623744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6624233Z outputs = self.model.decoder( 2025-08-14T21:50:57.6624712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6625185Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6625658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6626110Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6626715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6627262Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6627776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6628321Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6628872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:57.6629449Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:57.6629661Z 2025-08-14T21:50:57.6629760Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6630020Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6630299Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6630740Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6631144Z return mod(**inputs) 2025-08-14T21:50:57.6631594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6632082Z outputs = self.model.decoder( 2025-08-14T21:50:57.6632556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6633042Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6633474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6633921Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6634410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:50:57.6634945Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:57.6635432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:57.6635915Z return self.act(input) 2025-08-14T21:50:57.6644411Z 2025-08-14T21:50:57.6644531Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6644821Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6645110Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6645395Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6645671Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6646940Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6647245Z cudagraph partition due to non gpu ops 
2025-08-14T21:50:57.6647531Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6647857Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6648339Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6649110Z return mod(**inputs) 2025-08-14T21:50:57.6649567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6650059Z outputs = self.model.decoder( 2025-08-14T21:50:57.6652672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6653163Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6653592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6654049Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6654587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6655101Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6655611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6656122Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6656749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:57.6657354Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:57.6657650Z 2025-08-14T21:50:57.6657785Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:57.6658239Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6658651Z return mod(**inputs) 2025-08-14T21:50:57.6659101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6659596Z outputs = self.model.decoder( 2025-08-14T21:50:57.6660074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6660550Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6660988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6661442Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6661933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6662442Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6662958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6663468Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6664015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:57.6664588Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:57.6664854Z 2025-08-14T21:50:57.6664953Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6665293Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6665573Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6666018Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6666428Z return mod(**inputs) 2025-08-14T21:50:57.6666911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6667404Z outputs = self.model.decoder( 2025-08-14T21:50:57.6667880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6668365Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6668845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6669299Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6669792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:50:57.6670334Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:57.6670812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:57.6671238Z return self.act(input) 2025-08-14T21:50:57.6671374Z 2025-08-14T21:50:57.6671485Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6671730Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6671983Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6672238Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6672477Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6672725Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6672973Z cudagraph partition due to non gpu ops 
2025-08-14T21:50:57.6673217Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6673517Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6673967Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6674370Z return mod(**inputs) 2025-08-14T21:50:57.6674838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6675327Z outputs = self.model.decoder( 2025-08-14T21:50:57.6675801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6676290Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6676720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6677172Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6677665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6678171Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6678683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6679249Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6684050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:57.6684657Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:57.6684900Z 2025-08-14T21:50:57.6685036Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:57.6685488Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6685896Z return mod(**inputs) 2025-08-14T21:50:57.6686349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6686839Z outputs = self.model.decoder( 2025-08-14T21:50:57.6687318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6687799Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6688236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6688729Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6689225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6689737Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6690291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6690810Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6691366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:57.6691931Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:57.6692145Z 2025-08-14T21:50:57.6692248Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6692509Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6692789Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6693236Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6693651Z return mod(**inputs) 2025-08-14T21:50:57.6694218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6694704Z outputs = self.model.decoder( 2025-08-14T21:50:57.6695176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6695683Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6696109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6696592Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6697082Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:50:57.6697620Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:57.6698096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:57.6698531Z return self.act(input) 2025-08-14T21:50:57.6698664Z 2025-08-14T21:50:57.6698768Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6699021Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6699265Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6699509Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6699762Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6699995Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6700240Z cudagraph partition due to non gpu ops 
2025-08-14T21:50:57.6700484Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6700764Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6701216Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6701618Z return mod(**inputs) 2025-08-14T21:50:57.6702064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6702556Z outputs = self.model.decoder( 2025-08-14T21:50:57.6703031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6703516Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6703950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6704400Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6704889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6705403Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6705942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6706454Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6707007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:57.6707623Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:57.6707863Z 2025-08-14T21:50:57.6707993Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:57.6708492Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6713135Z return mod(**inputs) 2025-08-14T21:50:57.6713585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6714069Z outputs = self.model.decoder( 2025-08-14T21:50:57.6714549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6715021Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6715467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6715924Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6716419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6717005Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6717516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6718057Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6718616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:57.6719180Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:57.6719397Z 2025-08-14T21:50:57.6719495Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6719751Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6720031Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6720477Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6720882Z return mod(**inputs) 2025-08-14T21:50:57.6721425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6721906Z outputs = self.model.decoder( 2025-08-14T21:50:57.6722378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6722914Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6723416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6723867Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6724355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:50:57.6724892Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:57.6725368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:57.6725789Z return self.act(input) 2025-08-14T21:50:57.6725926Z 2025-08-14T21:50:57.6726029Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6726272Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6726526Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6726772Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6727017Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6727280Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6727524Z cudagraph partition due to non gpu ops 
2025-08-14T21:50:57.6727770Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6728043Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6728493Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6728917Z return mod(**inputs) 2025-08-14T21:50:57.6729370Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6729866Z outputs = self.model.decoder( 2025-08-14T21:50:57.6730342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6730826Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6731251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6731703Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6732189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6732701Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6733203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6733712Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6734311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:57.6734904Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:57.6735172Z 2025-08-14T21:50:57.6735302Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:57.6735750Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6736152Z return mod(**inputs) 2025-08-14T21:50:57.6736591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6737078Z outputs = self.model.decoder( 2025-08-14T21:50:57.6741858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6742354Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6742777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6743237Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6743724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6744233Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6744755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6745264Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6745818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:57.6746382Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:57.6746588Z 2025-08-14T21:50:57.6746685Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6746940Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6747216Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6747655Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6748056Z return mod(**inputs) 2025-08-14T21:50:57.6748505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6749397Z outputs = self.model.decoder( 2025-08-14T21:50:57.6749871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6750353Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6750809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6751257Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6751793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:50:57.6752405Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:57.6752882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:57.6753304Z return self.act(input) 2025-08-14T21:50:57.6753441Z 2025-08-14T21:50:57.6753548Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6753798Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6754046Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6754302Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6754545Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6754781Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6755029Z cudagraph partition due to non gpu ops 
2025-08-14T21:50:57.6755272Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6755543Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6756031Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6756436Z return mod(**inputs) 2025-08-14T21:50:57.6756893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6757405Z outputs = self.model.decoder( 2025-08-14T21:50:57.6757928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6758411Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6758833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6759279Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6759765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6760278Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6760775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6761392Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6761941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:57.6762541Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:57.6762770Z 2025-08-14T21:50:57.6762900Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:57.6763346Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6763748Z return mod(**inputs) 2025-08-14T21:50:57.6764196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6764689Z outputs = self.model.decoder( 2025-08-14T21:50:57.6765165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6765648Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6766072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6774867Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6775513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6776193Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6776782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6777291Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6777840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:57.6778403Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:57.6778616Z 2025-08-14T21:50:57.6778716Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6778977Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6779259Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6779696Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6780096Z return mod(**inputs) 2025-08-14T21:50:57.6780547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6781143Z outputs = self.model.decoder( 2025-08-14T21:50:57.6781625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6782129Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6792549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6793036Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6793640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:50:57.6794193Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:57.6794693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:57.6795135Z return self.act(input) 2025-08-14T21:50:57.6795349Z 2025-08-14T21:50:57.6795467Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6795855Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6796118Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6796366Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6796611Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6796861Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6797109Z cudagraph partition due to non gpu ops 
2025-08-14T21:50:57.6797347Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6797635Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6798102Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6798510Z return mod(**inputs) 2025-08-14T21:50:57.6798977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6799480Z outputs = self.model.decoder( 2025-08-14T21:50:57.6799966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6800447Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6800893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6801446Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6801946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6802463Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6803028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6803547Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6804105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:57.6804742Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:57.6804993Z 2025-08-14T21:50:57.6805128Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:57.6805585Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6805991Z return mod(**inputs) 2025-08-14T21:50:57.6806454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6806946Z outputs = self.model.decoder( 2025-08-14T21:50:57.6807424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6807907Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6808346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6808800Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6809283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6809856Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6816795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6817310Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6817890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:57.6818468Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:57.6818673Z 2025-08-14T21:50:57.6818783Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6819037Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6819334Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6819789Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6820203Z return mod(**inputs) 2025-08-14T21:50:57.6820650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6821136Z outputs = self.model.decoder( 2025-08-14T21:50:57.6821611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6822092Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6822529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6822979Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6823469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:50:57.6823999Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:57.6824618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:57.6825054Z return self.act(input) 2025-08-14T21:50:57.6825198Z 2025-08-14T21:50:57.6825307Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6825558Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6825810Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6826062Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6826309Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6826562Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6826842Z cudagraph partition due to non gpu ops 
2025-08-14T21:50:57.6827087Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6827371Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6827827Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6828237Z return mod(**inputs) 2025-08-14T21:50:57.6828706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6829200Z outputs = self.model.decoder( 2025-08-14T21:50:57.6829681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6830156Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6830602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6831060Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6831549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6832057Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6832571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6833084Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6833633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:57.6834261Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:57.6834504Z 2025-08-14T21:50:57.6834657Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:57.6835106Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6835504Z return mod(**inputs) 2025-08-14T21:50:57.6835957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6836442Z outputs = self.model.decoder( 2025-08-14T21:50:57.6836918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6837392Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6837824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6838276Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6838813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6843566Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6844084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6844594Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6845139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:57.6845718Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:57.6845927Z 2025-08-14T21:50:57.6846034Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6846294Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6846573Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6847019Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6847432Z return mod(**inputs) 2025-08-14T21:50:57.6847882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6848376Z outputs = self.model.decoder( 2025-08-14T21:50:57.6849320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6849865Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6850450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6850944Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6851435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:50:57.6851976Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:57.6852465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:57.6852899Z return self.act(input) 2025-08-14T21:50:57.6853036Z 2025-08-14T21:50:57.6853147Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6853452Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6853776Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6854026Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6854269Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6854511Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6854752Z cudagraph partition due to non gpu ops 
2025-08-14T21:50:57.6854989Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6855269Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6855716Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6856150Z return mod(**inputs) 2025-08-14T21:50:57.6856600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6857118Z outputs = self.model.decoder( 2025-08-14T21:50:57.6857589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6858062Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6858487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6858938Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6859419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6859935Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6860441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6860950Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6861507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:57.6862112Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:57.6862347Z 2025-08-14T21:50:57.6862486Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:57.6862924Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6863323Z return mod(**inputs) 2025-08-14T21:50:57.6863777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6864262Z outputs = self.model.decoder( 2025-08-14T21:50:57.6864729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6865212Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6865654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6866101Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6866610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:57.6867130Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:57.6867649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:57.6872398Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:57.6872948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:57.6873529Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:57.6873733Z 2025-08-14T21:50:57.6873846Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6874105Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6874402Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:57.6874853Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:57.6875259Z return mod(**inputs) 2025-08-14T21:50:57.6875753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:50:57.6876242Z outputs = self.model.decoder( 2025-08-14T21:50:57.6876725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:57.6877199Z layer_outputs = decoder_layer( 2025-08-14T21:50:57.6877658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:57.6878105Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:57.6878611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:50:57.6879141Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:57.6879626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:57.6880052Z return self.act(input) 2025-08-14T21:50:57.6880187Z 2025-08-14T21:50:57.6880282Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6880532Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6880789Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6881041Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6881359Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6881604Z cudagraph partition due to non gpu ops 2025-08-14T21:50:57.6881845Z cudagraph partition due to non gpu ops 
2025-08-14T21:50:57.6939482Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:50:57.6939930Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:57.6940366Z return mod(**inputs)
2025-08-14T21:50:57.6942950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1880, in forward
2025-08-14T21:50:57.6943434Z logits = self.lm_head(outputs[0])
2025-08-14T21:50:57.6943599Z
2025-08-14T21:50:57.6943735Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:50:57.6944167Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:57.6944641Z return mod(**inputs)
2025-08-14T21:50:57.6945089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1886, in forward
2025-08-14T21:50:57.6945681Z loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:50:57.6945935Z
2025-08-14T21:51:05.5840839Z Compilation time (from dynamo_timed): 19.927113511
2025-08-14T21:51:05.6222984Z pass
2025-08-14T21:51:05.6223770Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:51:05.6225394Z TIMING: _recursive_pre_grad_passes:0.05204 _recursive_joint_graph_passes:0.80717 _recursive_post_grad_passes:0.10926 async_compile.wait:0.91061 code_gen:6.71871 inductor_compile:10.4873 backend_compile:16.84005 gc:0.00028 entire_frame_compile:19.92711 total_wall_time:19.92711
2025-08-14T21:51:05.6226664Z STATS: call_* op count: 373 | FakeTensorMode.__torch_dispatch__:24996 | FakeTensor.__torch_dispatch__:4012 | ProxyTorchDispatchMode.__torch_dispatch__:5664
2025-08-14T21:51:05.6227341Z Dynamo produced 1 graphs covering 373 ops with 0 graph breaks (0 unique)
2025-08-14T21:51:12.0804864Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:51:12.0805939Z from pkg_resources import resource_filename
2025-08-14T21:51:12.7936017Z
2025-08-14T21:51:20.9548749Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:51:20.9549186Z loading model: 0it [00:08, ?it/s]
2025-08-14T21:51:20.9592237Z cpu eval MBartForConditionalGeneration
2025-08-14T21:51:25.9214413Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:51:28.2797808Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:51:30.5958871Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:52:02.2841117Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:52:02.2841822Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:52:02.2842538Z return mod(**inputs)
2025-08-14T21:52:02.2847290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1436, in forward
2025-08-14T21:52:02.2847892Z decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id)
2025-08-14T21:52:02.2848589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 76, in shift_tokens_right
2025-08-14T21:52:02.2849559Z index_of_eos = (prev_output_tokens.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
2025-08-14T21:52:02.2849836Z
2025-08-14T21:52:02.2849952Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2850209Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2850472Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2850733Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2850991Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2851263Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2851514Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2851758Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2852007Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2852247Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2852491Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2852745Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2852985Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2853228Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2853539Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2853781Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2854022Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2854271Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2854581Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.2854894Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:52:02.2855431Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.2855839Z return mod(**inputs) 2025-08-14T21:52:02.2856604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.2857384Z outputs = self.model( 2025-08-14T21:52:02.2857941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.2858433Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.2858913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.2859395Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.2859836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.2860305Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.2860788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.2861299Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.2861807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.2862323Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.2862882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.2863495Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.2863734Z 2025-08-14T21:52:02.2863873Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.2864322Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.2864732Z return mod(**inputs) 2025-08-14T21:52:02.2865279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.2865765Z outputs = self.model( 2025-08-14T21:52:02.2866208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.2866738Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.2867225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.2867933Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.2868620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.2869357Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.2870026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.2870761Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.2871344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.2876199Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.2876753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.2877333Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.2877603Z 2025-08-14T21:52:02.2877710Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2877964Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2878242Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.2878960Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.2879686Z return mod(**inputs) 2025-08-14T21:52:02.2880259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.2880909Z outputs = self.model( 2025-08-14T21:52:02.2881598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.2882282Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.2883114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.2883907Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.2884446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.2885099Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.2885937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:52:02.2886727Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.2887231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.2887646Z return self.act(input) 2025-08-14T21:52:02.2887792Z 2025-08-14T21:52:02.2887894Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2888154Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2888408Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2888651Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2888899Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2889143Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2889376Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2889743Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2890036Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.2890473Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.2890939Z return mod(**inputs) 2025-08-14T21:52:02.2891393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.2891871Z outputs = self.model( 2025-08-14T21:52:02.2892338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.2892822Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.2893297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.2893783Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.2894205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.2894653Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.2895156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.2895836Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.2896339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.2896980Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.2897793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.2898874Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.2899240Z 2025-08-14T21:52:02.2899459Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.2900039Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.2909092Z return mod(**inputs) 2025-08-14T21:52:02.2909909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.2910612Z outputs = self.model( 2025-08-14T21:52:02.2911258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.2911740Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.2912300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.2913051Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.2913683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.2914297Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.2914918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.2917620Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.2918118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.2918634Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.2919191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.2919775Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.2919982Z 2025-08-14T21:52:02.2920092Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2920359Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2920657Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.2921092Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.2921571Z return mod(**inputs) 2025-08-14T21:52:02.2922075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.2922557Z outputs = self.model( 2025-08-14T21:52:02.2922996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.2923481Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.2924092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.2924812Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.2925515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.2926228Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.2926899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:52:02.2927547Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.2928190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.2928620Z return self.act(input) 2025-08-14T21:52:02.2928757Z 2025-08-14T21:52:02.2928910Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2929198Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2929446Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2929696Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2930033Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2930384Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2930630Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2930866Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2931153Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.2931652Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.2932051Z return mod(**inputs) 2025-08-14T21:52:02.2932501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.2932981Z outputs = self.model( 2025-08-14T21:52:02.2933437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.2933913Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.2934419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.2934920Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.2935351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.2935793Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.2936279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.2936784Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.2937288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.2937790Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.2938347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.2938949Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.2939185Z 2025-08-14T21:52:02.2939318Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.2939772Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.2940181Z return mod(**inputs) 2025-08-14T21:52:02.2940630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.2941142Z outputs = self.model( 2025-08-14T21:52:02.2941591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.2942077Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.2942569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.2943122Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.2943617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.2944070Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.2949164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.2949681Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.2950185Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.2950793Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.2951713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.2952487Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.2952765Z 2025-08-14T21:52:02.2952962Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2953403Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2954005Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.2954751Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.2955282Z return mod(**inputs) 2025-08-14T21:52:02.2955912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.2956601Z outputs = self.model( 2025-08-14T21:52:02.2957052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.2957621Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.2958131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.2958631Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.2959188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.2959659Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.2960145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:52:02.2960690Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.2961265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.2961686Z return self.act(input) 2025-08-14T21:52:02.2961833Z 2025-08-14T21:52:02.2961934Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2962194Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2962439Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2962688Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2962939Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2963186Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2963540Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2963785Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2964065Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.2964510Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.2964928Z return mod(**inputs) 2025-08-14T21:52:02.2965501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.2966185Z outputs = self.model( 2025-08-14T21:52:02.2966742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.2967236Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.2967757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.2968230Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.2968666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.2969116Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.2969648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.2970159Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.2970659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.2971169Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.2971719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.2972319Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.2972559Z 2025-08-14T21:52:02.2972689Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.2973158Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.2977755Z return mod(**inputs) 2025-08-14T21:52:02.2978263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.2978740Z outputs = self.model( 2025-08-14T21:52:02.2979184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.2979665Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.2980136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.2980619Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.2981041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.2981503Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.2981989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.2982496Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.2982991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.2983508Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.2984071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.2984636Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.2984854Z 2025-08-14T21:52:02.2984957Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2985216Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2985503Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.2985941Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.2986342Z return mod(**inputs) 2025-08-14T21:52:02.2986793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.2987263Z outputs = self.model( 2025-08-14T21:52:02.2987812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.2988355Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.2988838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.2989315Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.2989777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.2990244Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.2990733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:52:02.2991276Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.2991764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.2992192Z return self.act(input) 2025-08-14T21:52:02.2992363Z 2025-08-14T21:52:02.2992463Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2992717Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2992965Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2993203Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2993446Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2993689Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2993932Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2994195Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.2994474Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.2994920Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.2995339Z return mod(**inputs) 2025-08-14T21:52:02.2995790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.2996266Z outputs = self.model( 2025-08-14T21:52:02.2996763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.2997245Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.2997719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.2998198Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.2998621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.2999073Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.2999557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3000065Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3000561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3001075Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3001732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3006538Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3006785Z 2025-08-14T21:52:02.3006917Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3007369Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3007779Z return mod(**inputs) 2025-08-14T21:52:02.3008226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3008713Z outputs = self.model( 2025-08-14T21:52:02.3009192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3009679Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3010146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3010626Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3011083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3011529Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3012015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3012521Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3013029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3013534Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3014095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3014667Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3014875Z 2025-08-14T21:52:02.3014981Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3015231Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3015517Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3015965Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3016403Z return mod(**inputs) 2025-08-14T21:52:02.3016928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3017498Z outputs = self.model( 2025-08-14T21:52:02.3017940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3018421Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3018902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3019382Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3019803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3020255Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3020742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:52:02.3021281Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.3021757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.3022180Z return self.act(input) 2025-08-14T21:52:02.3022319Z 2025-08-14T21:52:02.3022425Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3022667Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3022920Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3023160Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3023394Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3023640Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3023882Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3024128Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3024399Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3024850Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3025254Z return mod(**inputs) 2025-08-14T21:52:02.3025703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3026178Z outputs = self.model( 2025-08-14T21:52:02.3026659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3027144Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3027615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3028124Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3028561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3029004Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3029494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3030004Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3030509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3031020Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3040135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3040965Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3041335Z 2025-08-14T21:52:02.3041477Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3041915Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3042351Z return mod(**inputs) 2025-08-14T21:52:02.3042807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3043308Z outputs = self.model( 2025-08-14T21:52:02.3043767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3044250Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3044727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3045201Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3045636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3046171Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3046684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3047188Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3047687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3048202Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3049048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3062460Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3062803Z 2025-08-14T21:52:02.3062936Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3063212Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3063517Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3063989Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3064415Z return mod(**inputs) 2025-08-14T21:52:02.3064953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3065480Z outputs = self.model( 2025-08-14T21:52:02.3065945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3066576Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3067058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3067543Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3067983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3068471Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3068982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:52:02.3069537Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.3070034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.3070460Z return self.act(input) 2025-08-14T21:52:02.3070609Z 2025-08-14T21:52:02.3070714Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3070981Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3071229Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3071479Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3071730Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3071971Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3072218Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3072468Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3072761Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3073252Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3073661Z return mod(**inputs) 2025-08-14T21:52:02.3074131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3074652Z outputs = self.model( 2025-08-14T21:52:02.3081522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3082021Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3082512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3082994Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3083436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3083896Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3084381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3084897Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3085404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3085919Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3086477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3087088Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3087335Z 2025-08-14T21:52:02.3087470Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3087922Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3088322Z return mod(**inputs) 2025-08-14T21:52:02.3088783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3089339Z outputs = self.model( 2025-08-14T21:52:02.3089846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3090341Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3090857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3091343Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3091770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3092243Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3092737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3093254Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3093781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3094319Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3094883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3095461Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3095675Z 2025-08-14T21:52:02.3095776Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3096035Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3096324Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3096771Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3097183Z return mod(**inputs) 2025-08-14T21:52:02.3097681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3098156Z outputs = self.model( 2025-08-14T21:52:02.3098617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3099131Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3099612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3100092Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3100526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3100988Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3101471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:52:02.3102023Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.3102512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.3102944Z return self.act(input) 2025-08-14T21:52:02.3103082Z 2025-08-14T21:52:02.3103182Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3103438Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3103692Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3108134Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3108396Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3108655Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3108909Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3109149Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3109446Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3109901Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3110304Z return mod(**inputs) 2025-08-14T21:52:02.3110769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3111256Z outputs = self.model( 2025-08-14T21:52:02.3111702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3112220Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3112705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3113189Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3113645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3114098Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3114589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3115104Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3115603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3116124Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3116686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3117296Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3117539Z 2025-08-14T21:52:02.3117672Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3118119Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3118588Z return mod(**inputs) 2025-08-14T21:52:02.3119096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3119612Z outputs = self.model( 2025-08-14T21:52:02.3120068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3120576Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3121062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3121616Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3122041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3122489Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3123031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3123543Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3124041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3124549Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3125114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3125688Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3125890Z 2025-08-14T21:52:02.3125990Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3126248Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3126535Z cudagraph partition due to non gpu ops. 
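The three stack traces above all bottom out in the same encoder-layer code path: torch.nn.functional.scaled_dot_product_attention (sdpa_attention.py line 81), the transpose(1, 2).contiguous() that follows it (line 91), and the fc1 plus activation of the feed-forward block (modeling_mbart.py line 332). Below is a minimal sketch of that call pattern only; the toy module, its sizes, and the compile settings are illustrative assumptions, not the benchmark's code.

    # Minimal sketch of the encoder-layer pattern named in the frames above.
    # The toy module, its sizes, and the compile settings are illustrative
    # assumptions; only the call pattern (SDPA -> transpose(1, 2).contiguous()
    # -> fc1 -> activation) mirrors the traced lines.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyEncoderLayer(nn.Module):
        def __init__(self, embed_dim=64, num_heads=4, ffn_dim=256):
            super().__init__()
            self.num_heads = num_heads
            self.head_dim = embed_dim // num_heads
            self.qkv = nn.Linear(embed_dim, 3 * embed_dim)
            self.fc1 = nn.Linear(embed_dim, ffn_dim)
            self.fc2 = nn.Linear(ffn_dim, embed_dim)
            self.activation_fn = nn.GELU()

        def forward(self, hidden_states):
            bsz, seq_len, _ = hidden_states.shape
            q, k, v = self.qkv(hidden_states).chunk(3, dim=-1)
            # (bsz, seq, dim) -> (bsz, heads, seq, head_dim), the layout SDPA expects
            heads = (bsz, seq_len, self.num_heads, self.head_dim)
            q, k, v = (t.view(heads).transpose(1, 2) for t in (q, k, v))
            attn = F.scaled_dot_product_attention(q, k, v)       # sdpa_attention.py:81 in the traces
            attn = attn.transpose(1, 2).contiguous()             # sdpa_attention.py:91 in the traces
            attn = attn.reshape(bsz, seq_len, -1)
            return self.fc2(self.activation_fn(self.fc1(attn)))  # fc1 + activation, modeling_mbart.py:332

    layer = ToyEncoderLayer().eval()
    compiled = torch.compile(layer, mode="reduce-overhead")  # asks Inductor for CUDA-graph capture where possible
    with torch.no_grad():
        out = compiled(torch.randn(2, 16, 64))

Under that mode Inductor records GPU-only regions for CUDA-graph replay, and the repeated "cudagraph partition due to non gpu ops" lines appear to mark where the partitioner splits the compiled region around ops it does not treat as GPU work, which would explain one message per attention and feed-forward segment.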
Found from : 2025-08-14T21:52:02.3126978Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3127380Z return mod(**inputs) 2025-08-14T21:52:02.3127832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3128310Z outputs = self.model( 2025-08-14T21:52:02.3128754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3129245Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3129746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3130220Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3130651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3131108Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3131625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:52:02.3132159Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.3132650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.3137326Z return self.act(input) 2025-08-14T21:52:02.3137464Z 2025-08-14T21:52:02.3137566Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3137827Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3138077Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3138330Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3138578Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3138822Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3139076Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3139312Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3139595Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3140051Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3140456Z return mod(**inputs) 2025-08-14T21:52:02.3140964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3141453Z outputs = self.model( 2025-08-14T21:52:02.3141933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3142412Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3142884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3143372Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3143806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3144268Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3144750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3145258Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3145752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3146261Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3146814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3147488Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3147768Z 2025-08-14T21:52:02.3147900Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3148342Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3149078Z return mod(**inputs) 2025-08-14T21:52:02.3149528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3150015Z outputs = self.model( 2025-08-14T21:52:02.3150492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3150978Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3151449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3152057Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3152481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3152933Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3153485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3153990Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3154487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3155002Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3155557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3156244Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3156500Z 2025-08-14T21:52:02.3156603Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3156854Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3157136Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3157573Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3157975Z return mod(**inputs) 2025-08-14T21:52:02.3158420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3158952Z outputs = self.model( 2025-08-14T21:52:02.3159394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3159878Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3160394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3160867Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3161381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3166009Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3166496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:52:02.3167033Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.3167527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.3167958Z return self.act(input) 2025-08-14T21:52:02.3168098Z 2025-08-14T21:52:02.3168201Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3168459Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3168708Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3168957Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3169207Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3169454Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3169695Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3169932Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3170210Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3170662Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3171069Z return mod(**inputs) 2025-08-14T21:52:02.3171521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3172007Z outputs = self.model( 2025-08-14T21:52:02.3172458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3172938Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3173443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3173921Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3174350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3174791Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3175303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3175809Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3176377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3176944Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3177504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3178105Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3178337Z 2025-08-14T21:52:02.3178465Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3178910Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3179317Z return mod(**inputs) 2025-08-14T21:52:02.3179760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3180241Z outputs = self.model( 2025-08-14T21:52:02.3180720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3181204Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3181689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3182164Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3182597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3183048Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3183523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3184022Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3184516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3185020Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3185568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3186137Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3186336Z 2025-08-14T21:52:02.3186439Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3186686Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3186974Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3187418Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3187816Z return mod(**inputs) 2025-08-14T21:52:02.3188275Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3188751Z outputs = self.model( 2025-08-14T21:52:02.3189203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3189688Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3190160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3190646Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3199630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3200222Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3200798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:52:02.3201466Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.3201948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.3202385Z return self.act(input) 2025-08-14T21:52:02.3202529Z 2025-08-14T21:52:02.3202626Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3202882Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3203133Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3203385Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3203634Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3203885Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3204135Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3204376Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3204651Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3205099Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3207635Z return mod(**inputs) 2025-08-14T21:52:02.3208090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3208587Z outputs = self.model( 2025-08-14T21:52:02.3209037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3209541Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3210006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3210499Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3210929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3211381Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3211855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3212361Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3212857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3213363Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3213913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3214516Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3214745Z 2025-08-14T21:52:02.3214883Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3215316Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3215719Z return mod(**inputs) 2025-08-14T21:52:02.3216175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3216650Z outputs = self.model( 2025-08-14T21:52:02.3217093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3217577Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3218047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3218522Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3218988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3219440Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3219992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3220544Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3221085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3221596Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3222152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3222714Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3222924Z 2025-08-14T21:52:02.3223024Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3223533Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3223814Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3224268Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3224727Z return mod(**inputs) 2025-08-14T21:52:02.3225182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3225650Z outputs = self.model( 2025-08-14T21:52:02.3226099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3226615Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3227078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3227582Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3228010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3228463Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3228937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:52:02.3229472Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.3229955Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.3230379Z return self.act(input) 2025-08-14T21:52:02.3230518Z 2025-08-14T21:52:02.3230614Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3230863Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3231116Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3231352Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3231597Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3231836Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3232077Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3232320Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3232602Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3233049Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3233441Z return mod(**inputs) 2025-08-14T21:52:02.3233895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3238629Z outputs = self.model( 2025-08-14T21:52:02.3239077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3239560Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3240035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3240518Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3240973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3241505Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3241989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3242511Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3243017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3243533Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3244095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3244695Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3244941Z 2025-08-14T21:52:02.3245071Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3245526Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3245938Z return mod(**inputs) 2025-08-14T21:52:02.3246388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3246868Z outputs = self.model( 2025-08-14T21:52:02.3247319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3247829Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3248307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3249238Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3249671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3250116Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3250604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:52:02.3251105Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:52:02.3251601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3252136Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3252687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3253293Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3253511Z 2025-08-14T21:52:02.3253612Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3253872Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3254160Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3254607Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3255003Z return mod(**inputs) 2025-08-14T21:52:02.3255459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3255933Z outputs = self.model( 2025-08-14T21:52:02.3256374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:52:02.3256854Z encoder_outputs = self.encoder( 2025-08-14T21:52:02.3257329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:52:02.3257802Z layer_outputs = encoder_layer( 2025-08-14T21:52:02.3258221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3258672Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3259240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:52:02.3259774Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.3260284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.3260714Z return self.act(input) 2025-08-14T21:52:02.3260850Z 2025-08-14T21:52:02.3260955Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3261213Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3261467Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3261712Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3261948Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3262195Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3262444Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3262678Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3262963Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3267612Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3268018Z return mod(**inputs) 2025-08-14T21:52:02.3268469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3268957Z outputs = self.model( 2025-08-14T21:52:02.3269411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3269945Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3270422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3270944Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3271378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3271825Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3272314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:52:02.3272832Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:52:02.3273349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3273853Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3274408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3275009Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3275247Z 2025-08-14T21:52:02.3275376Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3275819Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3276228Z return mod(**inputs) 2025-08-14T21:52:02.3276677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3277142Z outputs = self.model( 2025-08-14T21:52:02.3277594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3278157Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3278656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3279142Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3279573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3280027Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3280537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:52:02.3281052Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:52:02.3281641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3282173Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3282765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3283335Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3283537Z 2025-08-14T21:52:02.3283641Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3283890Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3284144Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3284389Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3284630Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3284864Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3285108Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3285347Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3285618Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3286060Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3286462Z return mod(**inputs) 2025-08-14T21:52:02.3286909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3287413Z outputs = self.model( 2025-08-14T21:52:02.3287866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3288367Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3288837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3289315Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3289747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3290200Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3290680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:52:02.3291213Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:52:02.3291736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3296461Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3297026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3297627Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3297859Z 2025-08-14T21:52:02.3297996Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3298433Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3298835Z return mod(**inputs) 2025-08-14T21:52:02.3299294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3299766Z outputs = self.model( 2025-08-14T21:52:02.3300219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3300711Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3301201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3301679Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3302149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3302601Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3303085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:52:02.3303623Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:52:02.3304143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3304652Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3305193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3305758Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3305967Z 2025-08-14T21:52:02.3306069Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3306323Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3306600Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3307107Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3307559Z return mod(**inputs) 2025-08-14T21:52:02.3308000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3308475Z outputs = self.model( 2025-08-14T21:52:02.3308948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3309431Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3309923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3310404Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3310837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3311283Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3311799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:52:02.3312340Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.3312832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.3313249Z return self.act(input) 2025-08-14T21:52:02.3313390Z 2025-08-14T21:52:02.3313486Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3313737Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3313993Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3314230Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3314476Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3314726Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3314964Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3315210Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3315528Z cudagraph partition due to non gpu ops. 
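The decoder-side frames follow the same shape as the encoder ones: self_attn (modeling_mbart.py line 415), the encoder_attn cross-attention (line 432), and the fc1 plus activation feed-forward (line 446), all reached from the benchmark's forward_pass via mod(**inputs). A hedged sketch of that outer call path follows; the checkpoint name, the toy inputs, and the compile call are assumptions for illustration, not the benchmark harness itself.

    # Hedged sketch of the outer call path shown in the frames
    # (benchmarks/dynamo/huggingface.py -> return mod(**inputs)); the
    # checkpoint, the toy inputs, and the compile settings are illustrative
    # assumptions, not the benchmark harness.
    import torch
    from transformers import AutoTokenizer, MBartForConditionalGeneration

    tok = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
    mod = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
    mod.eval()

    inputs = tok("UN Chief Says There Is No Military Solution in Syria",
                 return_tensors="pt")
    inputs["decoder_input_ids"] = inputs["input_ids"]  # toy decoder input for a single forward pass

    mod = torch.compile(mod, mode="reduce-overhead")

    with torch.no_grad():
        outputs = mod(**inputs)  # the call traced as forward_pass -> mod(**inputs)
    print(outputs.logits.shape)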
Found from : 2025-08-14T21:52:02.3315995Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3316402Z return mod(**inputs) 2025-08-14T21:52:02.3316860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3317340Z outputs = self.model( 2025-08-14T21:52:02.3317785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3318268Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3318774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3319254Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3319680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3320130Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3320636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:52:02.3321143Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:52:02.3325959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3326490Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3327042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3327649Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3327894Z 2025-08-14T21:52:02.3328025Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3328473Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3328866Z return mod(**inputs) 2025-08-14T21:52:02.3329322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3329806Z outputs = self.model( 2025-08-14T21:52:02.3330283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3330755Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3331226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3331725Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3332145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3332589Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3333081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:52:02.3333592Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:52:02.3334093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3334601Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3335153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3335785Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3335990Z 2025-08-14T21:52:02.3336088Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3336403Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3336652Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3336891Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3337139Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3337383Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3337620Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3337867Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3338149Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3338600Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3338997Z return mod(**inputs) 2025-08-14T21:52:02.3339451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3339939Z outputs = self.model( 2025-08-14T21:52:02.3340413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3340897Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3341375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3341855Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3342303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3342754Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3343241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:52:02.3343752Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:52:02.3344275Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3344787Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3345345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3345936Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3346184Z 2025-08-14T21:52:02.3346313Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3346762Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3347161Z return mod(**inputs) 2025-08-14T21:52:02.3347626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3348104Z outputs = self.model( 2025-08-14T21:52:02.3348579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3349460Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3349942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3358740Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3359299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3359890Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3360534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:52:02.3361315Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:52:02.3362007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3362684Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3363254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3363820Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3364025Z 2025-08-14T21:52:02.3364122Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3364373Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3364666Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3367323Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3367729Z return mod(**inputs) 2025-08-14T21:52:02.3368182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3368658Z outputs = self.model( 2025-08-14T21:52:02.3369110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3369597Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3370139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3370625Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3371048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3371528Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3372015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:52:02.3372562Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.3373043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.3373479Z return self.act(input) 2025-08-14T21:52:02.3373617Z 2025-08-14T21:52:02.3373721Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3373970Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3374230Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3374481Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3374718Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3374967Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3375213Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3375457Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3375733Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3376180Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3376613Z return mod(**inputs) 2025-08-14T21:52:02.3377056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3377579Z outputs = self.model( 2025-08-14T21:52:02.3378028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3378508Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3378981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3379547Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3380029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3380476Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3380966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:52:02.3381478Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:52:02.3381984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3382483Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3383037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3383634Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3383896Z 2025-08-14T21:52:02.3384052Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3384493Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3384888Z return mod(**inputs) 2025-08-14T21:52:02.3385336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3385805Z outputs = self.model( 2025-08-14T21:52:02.3386249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3386735Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3387237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3387714Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3388144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3388592Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3389094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:52:02.3389601Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:52:02.3390111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3390629Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3391175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3391752Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3391954Z 2025-08-14T21:52:02.3392062Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3392317Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3392562Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3392817Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3393066Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3393305Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3393550Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3399288Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3399567Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3400018Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3400467Z return mod(**inputs) 2025-08-14T21:52:02.3400925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3401494Z outputs = self.model( 2025-08-14T21:52:02.3401962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3402450Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3402930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3403424Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3403868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3404330Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3404821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:52:02.3405352Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:52:02.3405877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3406382Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3406937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3407542Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3407773Z 2025-08-14T21:52:02.3407912Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3408434Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3408900Z return mod(**inputs) 2025-08-14T21:52:02.3409353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3409838Z outputs = self.model( 2025-08-14T21:52:02.3410309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3410800Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3411272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3411744Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3412196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3412644Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3413178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:52:02.3413694Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:52:02.3414217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3414727Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3415281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3415841Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3416049Z 2025-08-14T21:52:02.3416152Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3416408Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3416686Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3417160Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3417561Z return mod(**inputs) 2025-08-14T21:52:02.3418010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3418503Z outputs = self.model( 2025-08-14T21:52:02.3418957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3419440Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3419904Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3420386Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3420817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3421266Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3421592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:52:02.3421742Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.3422027Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.3422115Z return self.act(input) 2025-08-14T21:52:02.3422130Z 2025-08-14T21:52:02.3422235Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3422332Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3422426Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3422526Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3422624Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3426944Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3427051Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3427148Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3427286Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:52:02.3427546Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:52:02.3427629Z     return mod(**inputs)
2025-08-14T21:52:02.3427961Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward
2025-08-14T21:52:02.3428080Z     outputs = self.model(
2025-08-14T21:52:02.3428403Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward
2025-08-14T21:52:02.3428510Z     decoder_outputs = self.decoder(
2025-08-14T21:52:02.3428855Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward
2025-08-14T21:52:02.3428949Z     layer_outputs = decoder_layer(
2025-08-14T21:52:02.3429235Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:52:02.3429340Z     return super().__call__(*args, **kwargs)
2025-08-14T21:52:02.3429665Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward
2025-08-14T21:52:02.3429791Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:52:02.3430108Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward
2025-08-14T21:52:02.3430240Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:52:02.3430610Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:52:02.3430784Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:52:02.3430797Z 
2025-08-14T21:52:02.3430927Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:52:02.3431177Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:52:02.3431288Z     return mod(**inputs)
2025-08-14T21:52:02.3431611Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward
2025-08-14T21:52:02.3431717Z     outputs = self.model(
2025-08-14T21:52:02.3432043Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward
2025-08-14T21:52:02.3432133Z     decoder_outputs = self.decoder(
2025-08-14T21:52:02.3432456Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward
2025-08-14T21:52:02.3432545Z     layer_outputs = decoder_layer(
2025-08-14T21:52:02.3432824Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:52:02.3432930Z     return super().__call__(*args, **kwargs)
2025-08-14T21:52:02.3433252Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward
2025-08-14T21:52:02.3433375Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:52:02.3433699Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward
2025-08-14T21:52:02.3433820Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:52:02.3434195Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:52:02.3434327Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:52:02.3434340Z 
2025-08-14T21:52:02.3434434Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.3434535Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.3434628Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.3434726Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.3434822Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.3434917Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.3435017Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.3435112Z cudagraph partition due to non gpu ops
2025-08-14T21:52:02.3435245Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:52:02.3435524Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:52:02.3435611Z     return mod(**inputs)
2025-08-14T21:52:02.3435933Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward
2025-08-14T21:52:02.3436021Z     outputs = self.model(
2025-08-14T21:52:02.3436362Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward
2025-08-14T21:52:02.3436462Z     decoder_outputs = self.decoder(
2025-08-14T21:52:02.3436782Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward
2025-08-14T21:52:02.3449277Z     layer_outputs = decoder_layer(
2025-08-14T21:52:02.3449709Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:52:02.3449834Z     return super().__call__(*args, **kwargs)
2025-08-14T21:52:02.3450191Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward
2025-08-14T21:52:02.3450343Z     hidden_states, cross_attn_weights = self.encoder_attn(
2025-08-14T21:52:02.3450676Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward
2025-08-14T21:52:02.3450811Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:52:02.3451197Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:52:02.3451502Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:52:02.3451516Z 
2025-08-14T21:52:02.3451662Z cudagraph partition due to non gpu ops.
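The attention-side frames (sdpa_attention.py lines 81 and 91 in the traces above) reduce to a scaled_dot_product_attention call followed by a transpose and a contiguous copy. A minimal sketch with made-up shapes, not the exact transformers implementation:

import torch
import torch.nn.functional as F

def sdpa_sketch(query, key, value):
    # Roughly what the two sdpa_attention.py frames do:
    # the fused SDPA kernel call (the line 81 frame) ...
    attn_output = F.scaled_dot_product_attention(query, key, value)
    # ... then back to (batch, seq, heads, head_dim) layout, materialized
    # contiguously (the line 91 frame).
    return attn_output.transpose(1, 2).contiguous()

# Hypothetical shapes: (batch, num_heads, seq_len, head_dim).
q = k = v = torch.randn(2, 16, 128, 64)
out = sdpa_sketch(q, k, v)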
Found from : 2025-08-14T21:52:02.3580852Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3580943Z return mod(**inputs) 2025-08-14T21:52:02.3581269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3581362Z outputs = self.model( 2025-08-14T21:52:02.3581682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3581772Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3582102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3586466Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3586760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3586860Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3587206Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:52:02.3587341Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:52:02.3587661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3587779Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3588155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3588316Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3588331Z 2025-08-14T21:52:02.3588466Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3588715Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3588797Z return mod(**inputs) 2025-08-14T21:52:02.3589126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3589213Z outputs = self.model( 2025-08-14T21:52:02.3589539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3589631Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3589951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3590050Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3590328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3590429Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3590753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:52:02.3590875Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:52:02.3591224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3591340Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3591707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3591862Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3591875Z 2025-08-14T21:52:02.3591971Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3592067Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3592168Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3592264Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3592365Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3592459Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3592551Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3592647Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3592777Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3593025Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3593114Z return mod(**inputs) 2025-08-14T21:52:02.3593435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3593527Z outputs = self.model( 2025-08-14T21:52:02.3593847Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3593969Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3594295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3594406Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3594685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3594796Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3595114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:52:02.3595251Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:52:02.3595567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3595683Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3596059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3596218Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3596232Z 2025-08-14T21:52:02.3596364Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3596612Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3596692Z return mod(**inputs) 2025-08-14T21:52:02.3597091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3597209Z outputs = self.model( 2025-08-14T21:52:02.3597540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3597641Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3597959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3598063Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3598343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3598443Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3598791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:52:02.3598922Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:52:02.3599247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3599361Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3599747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3599886Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3599899Z 2025-08-14T21:52:02.3599993Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3600088Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3600223Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3600472Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3600559Z return mod(**inputs) 2025-08-14T21:52:02.3600878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3600964Z outputs = self.model( 2025-08-14T21:52:02.3601393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3601485Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3601802Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3601925Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3602201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3602328Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3602646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:52:02.3602794Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.3603067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.3603151Z return self.act(input) 2025-08-14T21:52:02.3603164Z 2025-08-14T21:52:02.3603267Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3603360Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3603452Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3603556Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3603651Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3603742Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3603843Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3603935Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3604061Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3604321Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3604401Z return mod(**inputs) 2025-08-14T21:52:02.3604731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3604818Z outputs = self.model( 2025-08-14T21:52:02.3605143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3605241Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3605613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3605703Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3605989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3606087Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3606438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:52:02.3606560Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:52:02.3606899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3607028Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3607393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3607565Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3607578Z 2025-08-14T21:52:02.3607708Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3607955Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3608048Z return mod(**inputs) 2025-08-14T21:52:02.3608369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3608457Z outputs = self.model( 2025-08-14T21:52:02.3608780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3608871Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3609194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3609303Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3609587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3609713Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3610030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:52:02.3610158Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:52:02.3610471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3610591Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3610967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3611096Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3611111Z 2025-08-14T21:52:02.3615423Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3615543Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3615641Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3615751Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3615853Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3615952Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3616059Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3616154Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3616286Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3616548Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3616632Z return mod(**inputs) 2025-08-14T21:52:02.3616956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3617057Z outputs = self.model( 2025-08-14T21:52:02.3617381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3617488Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3617806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3617921Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3618212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3618313Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3618691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:52:02.3618823Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:52:02.3619139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3619265Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3619630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3619790Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3619809Z 2025-08-14T21:52:02.3619936Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3620184Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3620270Z return mod(**inputs) 2025-08-14T21:52:02.3620589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3620672Z outputs = self.model( 2025-08-14T21:52:02.3620999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3621116Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3621439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3621548Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3621828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3621935Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3622253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:52:02.3622383Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:52:02.3622706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3622822Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3623201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3623328Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3623342Z 2025-08-14T21:52:02.3623438Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3623537Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3623666Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3623917Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3623997Z return mod(**inputs) 2025-08-14T21:52:02.3624318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3624408Z outputs = self.model( 2025-08-14T21:52:02.3624727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3624819Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3625142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3625234Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3625541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3625639Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3626036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:52:02.3626224Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.3626533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.3626624Z return self.act(input) 2025-08-14T21:52:02.3626639Z 2025-08-14T21:52:02.3626741Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3626834Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3626934Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3627033Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3627128Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3627232Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3627327Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3627420Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3627557Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3627805Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3627889Z return mod(**inputs) 2025-08-14T21:52:02.3628222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3628309Z outputs = self.model( 2025-08-14T21:52:02.3628661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3628756Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3629115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3629213Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3629492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3629599Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3629916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:52:02.3630038Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:52:02.3630361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3630481Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3630852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3631023Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3631035Z 2025-08-14T21:52:02.3631163Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3631417Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3631499Z return mod(**inputs) 2025-08-14T21:52:02.3631822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3631917Z outputs = self.model( 2025-08-14T21:52:02.3632236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3632336Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3632660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3632751Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3633040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3633163Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3633480Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:52:02.3633607Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:52:02.3633944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3634067Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3634433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3634564Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3634579Z 2025-08-14T21:52:02.3634683Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3634781Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3634879Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3634971Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3635062Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3635158Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3635249Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3635341Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3635473Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3635721Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3635822Z return mod(**inputs) 2025-08-14T21:52:02.3636147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3636229Z outputs = self.model( 2025-08-14T21:52:02.3636575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3636664Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3636983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3637078Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3637357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3637457Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3637780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:52:02.3637911Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:52:02.3638235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3638355Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3638722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:52:02.3638884Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:52:02.3638896Z 2025-08-14T21:52:02.3639022Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3639275Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3639357Z return mod(**inputs) 2025-08-14T21:52:02.3639676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3639768Z outputs = self.model( 2025-08-14T21:52:02.3640088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3640181Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3649406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3649508Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3649894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3650002Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3650461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:52:02.3650618Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:52:02.3651056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:52:02.3651194Z attn_output, attn_weights = attention_interface( 2025-08-14T21:52:02.3651624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:52:02.3651755Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:02.3651768Z 2025-08-14T21:52:02.3651874Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3651969Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3652094Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:02.3652350Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3652434Z return mod(**inputs) 2025-08-14T21:52:02.3652759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:52:02.3652877Z outputs = self.model( 2025-08-14T21:52:02.3653194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:52:02.3653325Z decoder_outputs = self.decoder( 2025-08-14T21:52:02.3653647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:52:02.3653740Z layer_outputs = decoder_layer( 2025-08-14T21:52:02.3654027Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:02.3654130Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:02.3654453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:52:02.3654601Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:52:02.3656939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:02.3657039Z return self.act(input) 2025-08-14T21:52:02.3657051Z 2025-08-14T21:52:02.3657150Z cudagraph partition due to non gpu ops 2025-08-14T21:52:02.3657282Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:02.3657531Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:02.3657611Z return mod(**inputs) 2025-08-14T21:52:02.3657943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1456, in forward 2025-08-14T21:52:02.3658090Z lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias 2025-08-14T21:52:02.3658103Z 2025-08-14T21:52:02.3658229Z cudagraph partition due to non gpu ops. 
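Because the same "cudagraph partition due to non gpu ops. Found from :" diagnostic recurs once per layer, a quick way to digest a full job log is to tally the innermost source line of each reported stack. A minimal sketch under stated assumptions (the raw log saved locally as job.log with one timestamped entry per line; tally_partition_origins is a hypothetical helper, not part of the benchmark harness):

import re
from collections import Counter

TIMESTAMP = re.compile(r"^\S+Z ?")                  # leading ISO-8601 timestamp prefix
FRAME = re.compile(r'File "([^"]+)", line (\d+)')   # a traceback "File ..." frame

def tally_partition_origins(path: str) -> Counter:
    origins = Counter()
    in_block = False
    innermost = None
    with open(path, encoding="utf-8", errors="replace") as log:
        for raw in log:
            line = TIMESTAMP.sub("", raw).rstrip("\n")
            if "cudagraph partition due to non gpu ops. Found from :" in line:
                in_block, innermost = True, None    # a new "Found from :" stack starts
                continue
            if in_block:
                frame = FRAME.search(line)
                if frame:
                    innermost = f"{frame.group(1)}:{frame.group(2)}"  # keep the deepest frame seen
                elif not line.strip():              # blank line closes the stack
                    if innermost:
                        origins[innermost] += 1
                    in_block = False
    return origins

if __name__ == "__main__":
    for origin, count in tally_partition_origins("job.log").most_common(10):
        print(f"{count:4d}  {origin}")

Run over a log like this one, the top entries simply reflect which attention and feed-forward call sites appear most often in the "Found from :" stacks above.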
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1461, in forward
    masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))

2025-08-14T21:52:16.0835830Z Compilation time (from dynamo_timed): 41.573142691
2025-08-14T21:52:16.1057171Z pass
2025-08-14T21:52:16.1066812Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:52:16.1067913Z TIMING: _recursive_pre_grad_passes:0.13061 _recursive_joint_graph_passes:1.48309 _recursive_post_grad_passes:0.24601 async_compile.wait:0.8958 code_gen:10.52789 inductor_compile:17.37729 backend_compile:33.45376 gc:0.00036 entire_frame_compile:41.57314 total_wall_time:41.57314
2025-08-14T21:52:16.1069069Z STATS: call_* op count: 986 | FakeTensorMode.__torch_dispatch__:63787 | FakeTensor.__torch_dispatch__:9911 | ProxyTorchDispatchMode.__torch_dispatch__:14032
2025-08-14T21:52:16.1069703Z Dynamo produced 1 graphs covering 986 ops with 0 graph breaks (0 unique)
2025-08-14T21:52:22.9379390Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:52:22.9380479Z   from pkg_resources import resource_filename
2025-08-14T21:52:27.7946107Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:52:27.7946422Z loading model: 0it [00:03, ?it/s]
2025-08-14T21:52:27.7965909Z cpu eval MT5ForConditionalGeneration
2025-08-14T21:52:28.7246846Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:52:29.2209469Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:52:29.6953222Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
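The TIMING line above is the per-phase breakdown printed by the benchmark harness after compiling each model; the phase names and values below come straight from that line. A minimal sketch (not part of the CI tooling; parse_timing_line is a hypothetical helper) of turning such a line into a sorted per-phase table:

import re

def parse_timing_line(line: str) -> dict:
    # Expected shape: "TIMING: phase_name:seconds phase_name:seconds ..."
    assert line.startswith("TIMING:")
    return {name: float(value)
            for name, value in re.findall(r"([\w.]+):([\d.]+)", line[len("TIMING:"):])}

if __name__ == "__main__":
    line = ("TIMING: _recursive_pre_grad_passes:0.13061 _recursive_joint_graph_passes:1.48309 "
            "_recursive_post_grad_passes:0.24601 async_compile.wait:0.8958 code_gen:10.52789 "
            "inductor_compile:17.37729 backend_compile:33.45376 gc:0.00036 "
            "entire_frame_compile:41.57314 total_wall_time:41.57314")
    timings = parse_timing_line(line)
    total = timings["total_wall_time"]
    for name, seconds in sorted(timings.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name:35s} {seconds:9.5f} s  ({100 * seconds / total:5.1f}% of wall time)")

Sorting by the reported seconds makes it easy to see which phases account for most of the 41.57 s entire_frame_compile time recorded for this model.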
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward
    self_attention_outputs = self.layer[0](
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward
    attention_output = self.SelfAttention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 421, in forward
    position_bias = position_bias + causal_mask

Equivalent "cudagraph partition due to non gpu ops. Found from :" diagnostics follow for the other MT5ForConditionalGeneration partition points. They share the frames above down to modeling_layers.py line 94, reached through either modeling_mt5.py line 1787 (self.decoder) or line 1750 (self.encoder), and differ only in the inner frames:
  - encoder and decoder self-attention (modeling_mt5.py line 559 -> line 475, self.SelfAttention), ending in line 365 (self.q), line 385 (self.k), line 386 (self.v), line 401 (torch.matmul(query_states, key_states.transpose(3, 2))), line 440 (torch.matmul(attn_weights, value_states)), line 442 (attn_output.transpose(1, 2).contiguous()), or line 444 (self.o)
  - decoder cross-attention (modeling_mt5.py line 583 -> line 512, self.EncDecAttention), ending in line 365 (self.q)
  - encoder feed-forward (modeling_mt5.py line 609 -> line 216, self.DenseReluDense), ending in line 183 (self.act(self.wi_0(hidden_states))) or line 184 (self.wi_1(hidden_states))
cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:52:53.7803633Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.7804037Z return mod(**inputs) 2025-08-14T21:52:53.7804474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.7804942Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.7805392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.7805855Z layer_outputs = layer_module( 2025-08-14T21:52:53.7806280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.7806718Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.7807182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.7807668Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.7808145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.7808651Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.7809164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 185, in forward 2025-08-14T21:52:53.7809644Z hidden_states = hidden_gelu * hidden_linear 2025-08-14T21:52:53.7809828Z 2025-08-14T21:52:53.7809968Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.7810486Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.7815120Z return mod(**inputs) 2025-08-14T21:52:53.7815565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.7816032Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.7816527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.7816999Z layer_outputs = layer_module( 2025-08-14T21:52:53.7817437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.7817884Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.7818358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.7818852Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.7819327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.7819851Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.7820363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 198, in forward 2025-08-14T21:52:53.7820836Z hidden_states = self.wo(hidden_states) 2025-08-14T21:52:53.7821007Z 2025-08-14T21:52:53.7821140Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.7821590Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.7822027Z return mod(**inputs) 2025-08-14T21:52:53.7822461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.7822951Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.7823409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.7823882Z layer_outputs = layer_module( 2025-08-14T21:52:53.7824308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.7824797Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.7825357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.7825834Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.7826298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.7826777Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.7827259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:52:53.7827722Z query_states = self.q(hidden_states) 2025-08-14T21:52:53.7827905Z 2025-08-14T21:52:53.7828037Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.7828478Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.7828882Z return mod(**inputs) 2025-08-14T21:52:53.7829354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.7829822Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.7830281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.7830743Z layer_outputs = layer_module( 2025-08-14T21:52:53.7831168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.7831622Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.7832117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.7832581Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.7833058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.7833529Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.7834009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:52:53.7834471Z key_states = self.k(current_states) 2025-08-14T21:52:53.7834637Z 2025-08-14T21:52:53.7834773Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.7835208Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.7835605Z return mod(**inputs) 2025-08-14T21:52:53.7836029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.7836495Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.7836943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.7837401Z layer_outputs = layer_module( 2025-08-14T21:52:53.7837825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.7838262Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.7838721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.7839229Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.7848208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.7849169Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.7849647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:52:53.7850181Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:52:53.7850413Z 2025-08-14T21:52:53.7850551Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.7850992Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.7851395Z return mod(**inputs) 2025-08-14T21:52:53.7851829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.7852289Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.7852741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.7853215Z layer_outputs = layer_module( 2025-08-14T21:52:53.7853635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.7856178Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.7856645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.7857122Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.7857588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.7858111Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.7858581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:52:53.7859053Z value_states = self.v(current_states) 2025-08-14T21:52:53.7859221Z 2025-08-14T21:52:53.7859351Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:52:53.8057699Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:52:53.8058100Z     return mod(**inputs)
2025-08-14T21:52:53.8058534Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward
2025-08-14T21:52:53.8059050Z     encoder_outputs = self.encoder(
2025-08-14T21:52:53.8059515Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward
2025-08-14T21:52:53.8059976Z     layer_outputs = layer_module(
2025-08-14T21:52:53.8060396Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:52:53.8060850Z     return super().__call__(*args, **kwargs)
2025-08-14T21:52:53.8061324Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward
2025-08-14T21:52:53.8061799Z     self_attention_outputs = self.layer[0](
2025-08-14T21:52:53.8062263Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward
2025-08-14T21:52:53.8062740Z     attention_output = self.SelfAttention(
2025-08-14T21:52:53.8063206Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward
2025-08-14T21:52:53.8063673Z     attn_output = self.o(attn_output)
2025-08-14T21:52:53.8063843Z
2025-08-14T21:52:53.8063975Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:52:53.8064420Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:52:53.8064820Z     return mod(**inputs)
2025-08-14T21:52:53.8065244Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward
2025-08-14T21:52:53.8065714Z     encoder_outputs = self.encoder(
2025-08-14T21:52:53.8066197Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward
2025-08-14T21:52:53.8066663Z     layer_outputs = layer_module(
2025-08-14T21:52:53.8067083Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:52:53.8067533Z     return super().__call__(*args, **kwargs)
2025-08-14T21:52:53.8068032Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward
2025-08-14T21:52:53.8068506Z     self_attention_outputs = self.layer[0](
2025-08-14T21:52:53.8068978Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 485, in forward
2025-08-14T21:52:53.8069509Z     hidden_states = hidden_states + self.dropout(attention_output[0])
2025-08-14T21:52:53.8069750Z
2025-08-14T21:52:53.8069884Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:52:53.8120054Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8120465Z return mod(**inputs) 2025-08-14T21:52:53.8120895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.8121459Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.8121922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8122388Z layer_outputs = layer_module( 2025-08-14T21:52:53.8122823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8123265Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8123736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8124215Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8124679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8125158Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8125627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:52:53.8126163Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:52:53.8126391Z 2025-08-14T21:52:53.8126546Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8126993Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8127389Z return mod(**inputs) 2025-08-14T21:52:53.8127819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.8128304Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.8128761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8129237Z layer_outputs = layer_module( 2025-08-14T21:52:53.8138129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8138726Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8139343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8139976Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8140588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8141215Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8141689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:52:53.8142165Z value_states = self.v(current_states) 2025-08-14T21:52:53.8142335Z 2025-08-14T21:52:53.8142497Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8142940Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8143365Z return mod(**inputs) 2025-08-14T21:52:53.8143833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.8146452Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.8146915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8147379Z layer_outputs = layer_module( 2025-08-14T21:52:53.8147802Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8148309Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8167115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8167641Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8168137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8168626Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8169099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:52:53.8169605Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:52:53.8169811Z 2025-08-14T21:52:53.8169945Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8170393Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8170792Z return mod(**inputs) 2025-08-14T21:52:53.8171228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.8171694Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.8172152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8172608Z layer_outputs = layer_module( 2025-08-14T21:52:53.8177325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8177902Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8178368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8178834Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8179332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8179803Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8180262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:52:53.8180767Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:53.8180969Z 2025-08-14T21:52:53.8181100Z cudagraph partition due to non gpu ops. 
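Each of these records indicates that Inductor is partitioning the graph it would otherwise capture in a single CUDA graph because it found ops it classifies as non-GPU ops; the "Found from :" stack points at the model source that produced them, here the Linear, matmul, dropout and activation calls inside the MT5 encoder's self-attention and feed-forward blocks. A minimal sketch of the kind of call these frames come from (not the benchmark harness's exact setup; the checkpoint name and the input sentence below are illustrative assumptions):

    # Minimal sketch, not the CI harness: run an MT5 forward pass through torch.compile
    # the way benchmarks/dynamo/huggingface.py's forward_pass does (return mod(**inputs)).
    # The checkpoint and input text are illustrative assumptions, not values from this job.
    import torch
    from transformers import AutoTokenizer, MT5EncoderModel

    tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
    model = MT5EncoderModel.from_pretrained("google/mt5-small").eval()

    mod = torch.compile(model)  # default backend is inductor, as in this benchmark job

    inputs = tokenizer("a short example sentence", return_tensors="pt")
    with torch.no_grad():
        out = mod(**inputs)  # the traces above all start from a call like this
    print(out.last_hidden_state.shape)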
[The same set of cudagraph partition records repeats many more times in this run, with stack traces identical to those above and blaming the same modeling_mt5.py source lines.]
Found from : 2025-08-14T21:52:53.8338253Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8338339Z return mod(**inputs) 2025-08-14T21:52:53.8338645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.8338746Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.8339056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8339180Z layer_outputs = layer_module( 2025-08-14T21:52:53.8339469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8339569Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8339876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8340004Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8340304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8340412Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8340709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:52:53.8340850Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:52:53.8340863Z 2025-08-14T21:52:53.8340986Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8341233Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8341319Z return mod(**inputs) 2025-08-14T21:52:53.8341623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.8341712Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.8342020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8342153Z layer_outputs = layer_module( 2025-08-14T21:52:53.8342434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8342552Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8342855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8342960Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8343257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8343361Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8343659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:52:53.8343791Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:53.8343803Z 2025-08-14T21:52:53.8343934Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8344179Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8344258Z return mod(**inputs) 2025-08-14T21:52:53.8344568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.8344656Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.8344968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8345055Z layer_outputs = layer_module( 2025-08-14T21:52:53.8345332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8345436Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8345736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8345835Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8346140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8346241Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8346546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:52:53.8346664Z attn_output = self.o(attn_output) 2025-08-14T21:52:53.8346677Z 2025-08-14T21:52:53.8346857Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8347185Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8347266Z return mod(**inputs) 2025-08-14T21:52:53.8347598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.8347687Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.8347996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8348091Z layer_outputs = layer_module( 2025-08-14T21:52:53.8348374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8348469Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8349091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8349193Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8349504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 485, in forward 2025-08-14T21:52:53.8349670Z hidden_states = hidden_states + self.dropout(attention_output[0]) 2025-08-14T21:52:53.8349683Z 2025-08-14T21:52:53.8349809Z cudagraph partition due to non gpu ops. 
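For reference, the frames cited in the preceding traces (modeling_mt5.py lines 365, 385, 386, 401, 440, 442 and 444, plus the residual add at line 485) all sit inside the same MT5 attention block: q/k/v projections, the score matmul, the weighted-sum matmul, a transpose, and the output projection. The following is only a minimal sketch of that call sequence with illustrative dimensions, not the real MT5 configuration used by this benchmark:

    import torch
    import torch.nn as nn

    class MiniT5Attention(nn.Module):
        # Sketch of the projection/matmul sequence the partition traces point
        # into. Attribute names mirror MT5Attention (q, k, v, o); dimensions
        # here are placeholders, not the checkpoint's actual sizes.
        def __init__(self, d_model=512, n_heads=8, d_kv=64):
            super().__init__()
            self.n_heads, self.d_kv = n_heads, d_kv
            inner = n_heads * d_kv
            self.q = nn.Linear(d_model, inner, bias=False)
            self.k = nn.Linear(d_model, inner, bias=False)
            self.v = nn.Linear(d_model, inner, bias=False)
            self.o = nn.Linear(inner, d_model, bias=False)

        def forward(self, hidden_states, current_states=None):
            if current_states is None:           # self-attention; cross-attention
                current_states = hidden_states   # would pass encoder states here
            b, s, _ = hidden_states.shape
            def shape(x):
                return x.view(b, -1, self.n_heads, self.d_kv).transpose(1, 2)
            query_states = shape(self.q(hidden_states))                      # line 365
            key_states = shape(self.k(current_states))                       # line 385
            value_states = shape(self.v(current_states))                     # line 386
            scores = torch.matmul(query_states, key_states.transpose(3, 2))  # line 401
            attn_weights = torch.softmax(scores.float(), dim=-1).type_as(scores)
            attn_output = torch.matmul(attn_weights, value_states)           # line 440
            attn_output = attn_output.transpose(1, 2).contiguous()           # line 442
            attn_output = attn_output.view(b, s, -1)
            return self.o(attn_output)                                       # line 444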
Found from : 2025-08-14T21:52:53.8350102Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8350183Z return mod(**inputs) 2025-08-14T21:52:53.8350497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.8350619Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.8350926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8351022Z layer_outputs = layer_module( 2025-08-14T21:52:53.8351301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8351399Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8351712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8351823Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8352130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8352278Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8352580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 183, in forward 2025-08-14T21:52:53.8352707Z hidden_gelu = self.act(self.wi_0(hidden_states)) 2025-08-14T21:52:53.8352719Z 2025-08-14T21:52:53.8352842Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8353095Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8353175Z return mod(**inputs) 2025-08-14T21:52:53.8353478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.8353577Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.8353881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8353968Z layer_outputs = layer_module( 2025-08-14T21:52:53.8354259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8354357Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8354712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8354823Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8355121Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8355301Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8355602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 184, in forward 2025-08-14T21:52:53.8355703Z hidden_linear = self.wi_1(hidden_states) 2025-08-14T21:52:53.8355723Z 2025-08-14T21:52:53.8355850Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8356102Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8356194Z return mod(**inputs) 2025-08-14T21:52:53.8356500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.8356588Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.8356903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8357001Z layer_outputs = layer_module( 2025-08-14T21:52:53.8357296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8357397Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8357739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8357854Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8358180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8358322Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8358631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 185, in forward 2025-08-14T21:52:53.8358738Z hidden_states = hidden_gelu * hidden_linear 2025-08-14T21:52:53.8358751Z 2025-08-14T21:52:53.8358885Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8359137Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8359223Z return mod(**inputs) 2025-08-14T21:52:53.8359537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:52:53.8359626Z encoder_outputs = self.encoder( 2025-08-14T21:52:53.8359937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8360027Z layer_outputs = layer_module( 2025-08-14T21:52:53.8360304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8360413Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8360714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8360825Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8361198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8361394Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8365864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 198, in forward 2025-08-14T21:52:53.8365966Z hidden_states = self.wo(hidden_states) 2025-08-14T21:52:53.8365979Z 2025-08-14T21:52:53.8366105Z cudagraph partition due to non gpu ops. 
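The remaining encoder-side partition points (modeling_mt5.py lines 183, 184, 185 and 198) land in the gated feed-forward block reached through self.DenseReluDense. A minimal sketch of that block follows; nn.GELU is used here as a stand-in for whatever activation the checkpoint actually configures:

    import torch.nn as nn

    class MiniGatedFF(nn.Module):
        # Sketch of the gated feed-forward ("DenseReluDense") block: two
        # parallel input projections, a gated elementwise product, then the
        # wo output projection. Sizes are illustrative only.
        def __init__(self, d_model=512, d_ff=1024, dropout=0.1):
            super().__init__()
            self.wi_0 = nn.Linear(d_model, d_ff, bias=False)
            self.wi_1 = nn.Linear(d_model, d_ff, bias=False)
            self.wo = nn.Linear(d_ff, d_model, bias=False)
            self.dropout = nn.Dropout(dropout)
            self.act = nn.GELU()  # stand-in activation

        def forward(self, hidden_states):
            hidden_gelu = self.act(self.wi_0(hidden_states))  # line 183 in the traces
            hidden_linear = self.wi_1(hidden_states)          # line 184
            hidden_states = hidden_gelu * hidden_linear       # line 185
            hidden_states = self.dropout(hidden_states)
            return self.wo(hidden_states)                     # line 198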
Found from : 2025-08-14T21:52:53.8366386Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8366470Z return mod(**inputs) 2025-08-14T21:52:53.8366774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8366874Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8367198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8367295Z layer_outputs = layer_module( 2025-08-14T21:52:53.8367576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8367673Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8367981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8368081Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8368390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8368499Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8368798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:52:53.8368898Z key_states = self.k(current_states) 2025-08-14T21:52:53.8368911Z 2025-08-14T21:52:53.8369036Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8369307Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8369399Z return mod(**inputs) 2025-08-14T21:52:53.8369750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8369868Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8370174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8370264Z layer_outputs = layer_module( 2025-08-14T21:52:53.8370549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8370643Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8370952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8371055Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8371351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8371462Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8371762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:52:53.8371922Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:52:53.8371935Z 2025-08-14T21:52:53.8372072Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8372315Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8372401Z return mod(**inputs) 2025-08-14T21:52:53.8372707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8372800Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8373111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8373200Z layer_outputs = layer_module( 2025-08-14T21:52:53.8373477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8373581Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8373902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8374007Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8374309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8374412Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8374741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:52:53.8374838Z value_states = self.v(current_states) 2025-08-14T21:52:53.8374851Z 2025-08-14T21:52:53.8374984Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8375230Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8375313Z return mod(**inputs) 2025-08-14T21:52:53.8375630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8375731Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8376069Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8376244Z layer_outputs = layer_module( 2025-08-14T21:52:53.8376524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8376630Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8376960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8377058Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8377364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8377488Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8377797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:52:53.8377974Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:52:53.8377987Z 2025-08-14T21:52:53.8378112Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8378366Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8378447Z return mod(**inputs) 2025-08-14T21:52:53.8378747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8378851Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8379156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8379252Z layer_outputs = layer_module( 2025-08-14T21:52:53.8379531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8379627Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8379934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8380035Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8380336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8380445Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8380747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:52:53.8380891Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:53.8380905Z 2025-08-14T21:52:53.8381029Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8381298Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8381385Z return mod(**inputs) 2025-08-14T21:52:53.8381691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8381788Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8382109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8382197Z layer_outputs = layer_module( 2025-08-14T21:52:53.8382480Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8382575Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8382873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8382978Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8383278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8383386Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8383684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:52:53.8383777Z attn_output = self.o(attn_output) 2025-08-14T21:52:53.8383792Z 2025-08-14T21:52:53.8383924Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8384167Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8384281Z return mod(**inputs) 2025-08-14T21:52:53.8384581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8384692Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8384996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8385085Z layer_outputs = layer_module( 2025-08-14T21:52:53.8385362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8385463Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8385762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8385880Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8386177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8386320Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8386624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 183, in forward 2025-08-14T21:52:53.8386748Z hidden_gelu = self.act(self.wi_0(hidden_states)) 2025-08-14T21:52:53.8386761Z 2025-08-14T21:52:53.8386892Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8387139Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8387218Z return mod(**inputs) 2025-08-14T21:52:53.8387529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8387619Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8387918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8388012Z layer_outputs = layer_module( 2025-08-14T21:52:53.8388288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8388390Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8388710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8388819Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8389120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8389261Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8389589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 184, in forward 2025-08-14T21:52:53.8389693Z hidden_linear = self.wi_1(hidden_states) 2025-08-14T21:52:53.8389707Z 2025-08-14T21:52:53.8389834Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8390087Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8390170Z return mod(**inputs) 2025-08-14T21:52:53.8390531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8394868Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8395174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8395271Z layer_outputs = layer_module( 2025-08-14T21:52:53.8395554Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8395654Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8395961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8396103Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8396403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8396571Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8396874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 185, in forward 2025-08-14T21:52:53.8396992Z hidden_states = hidden_gelu * hidden_linear 2025-08-14T21:52:53.8397005Z 2025-08-14T21:52:53.8397130Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8397377Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8397469Z return mod(**inputs) 2025-08-14T21:52:53.8397772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8397872Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8398174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8398265Z layer_outputs = layer_module( 2025-08-14T21:52:53.8398552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8398649Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8398946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8399068Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8399366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8399513Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8399818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 198, in forward 2025-08-14T21:52:53.8399915Z hidden_states = self.wo(hidden_states) 2025-08-14T21:52:53.8399929Z 2025-08-14T21:52:53.8400060Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8400305Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8400409Z return mod(**inputs) 2025-08-14T21:52:53.8400718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8400807Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8401187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8401302Z layer_outputs = layer_module( 2025-08-14T21:52:53.8401581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8401687Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8401985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8402096Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8402398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8402500Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8402806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:52:53.8402905Z query_states = self.q(hidden_states) 2025-08-14T21:52:53.8402919Z 2025-08-14T21:52:53.8403051Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8403427Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8403582Z return mod(**inputs) 2025-08-14T21:52:53.8404031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8404189Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8404644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8404784Z layer_outputs = layer_module( 2025-08-14T21:52:53.8405078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8405257Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8405563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8405663Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8405966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8406068Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8406365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:52:53.8406467Z key_states = self.k(current_states) 2025-08-14T21:52:53.8406479Z 2025-08-14T21:52:53.8406609Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8406863Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8406942Z return mod(**inputs) 2025-08-14T21:52:53.8407244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8407342Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8407644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8407731Z layer_outputs = layer_module( 2025-08-14T21:52:53.8408014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8408113Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8408419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8408544Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8408845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8409004Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8409475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:52:53.8409705Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:52:53.8409721Z 2025-08-14T21:52:53.8409897Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8410253Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8410367Z return mod(**inputs) 2025-08-14T21:52:53.8410818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8410937Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8411394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8411511Z layer_outputs = layer_module( 2025-08-14T21:52:53.8411924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8412058Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8412501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8412665Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8413109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8413280Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8413646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:52:53.8413761Z value_states = self.v(current_states) 2025-08-14T21:52:53.8413775Z 2025-08-14T21:52:53.8413931Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8414292Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8414386Z return mod(**inputs) 2025-08-14T21:52:53.8414698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8414791Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8415100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8415189Z layer_outputs = layer_module( 2025-08-14T21:52:53.8415468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8415571Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8415874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8415974Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8416277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8416378Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8416747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:52:53.8416944Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:52:53.8416962Z 2025-08-14T21:52:53.8417148Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8417530Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8417650Z return mod(**inputs) 2025-08-14T21:52:53.8418080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8418173Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8418479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8418574Z layer_outputs = layer_module( 2025-08-14T21:52:53.8418873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8418980Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8419339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8419443Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8428250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8428365Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8428766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:52:53.8428926Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:53.8428941Z 2025-08-14T21:52:53.8429083Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8429423Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8429507Z return mod(**inputs) 2025-08-14T21:52:53.8429952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8430052Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8430490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8430582Z layer_outputs = layer_module( 2025-08-14T21:52:53.8430961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8431060Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8431366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8431466Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8431765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8431873Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8432172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:52:53.8432267Z attn_output = self.o(attn_output) 2025-08-14T21:52:53.8432287Z 2025-08-14T21:52:53.8432412Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8432659Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8432750Z return mod(**inputs) 2025-08-14T21:52:53.8433053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8433142Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8433450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8433538Z layer_outputs = layer_module( 2025-08-14T21:52:53.8433876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8433977Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8434345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8434452Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8434787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8434891Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8435198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:52:53.8435316Z query_states = self.q(hidden_states) 2025-08-14T21:52:53.8435329Z 2025-08-14T21:52:53.8435462Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8435708Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8435791Z return mod(**inputs) 2025-08-14T21:52:53.8436100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8436190Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8436491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8436584Z layer_outputs = layer_module( 2025-08-14T21:52:53.8436862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8436974Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8437276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8437377Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8437709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8437811Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8438185Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:52:53.8438279Z key_states = self.k(current_states) 2025-08-14T21:52:53.8438292Z 2025-08-14T21:52:53.8438418Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8438668Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8438748Z return mod(**inputs) 2025-08-14T21:52:53.8439048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8439141Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8439442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8439536Z layer_outputs = layer_module( 2025-08-14T21:52:53.8439810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8439907Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8440212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8440310Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8440612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8440715Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8441013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:52:53.8441262Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:52:53.8441279Z 2025-08-14T21:52:53.8441403Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8441651Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8441743Z return mod(**inputs) 2025-08-14T21:52:53.8442072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8442169Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8442469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8442557Z layer_outputs = layer_module( 2025-08-14T21:52:53.8442863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8442962Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8443260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8443371Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8443670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8443779Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8444077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:52:53.8444172Z value_states = self.v(current_states) 2025-08-14T21:52:53.8444184Z 2025-08-14T21:52:53.8444316Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8444566Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8444655Z return mod(**inputs) 2025-08-14T21:52:53.8444956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8445066Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8445372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8445481Z layer_outputs = layer_module( 2025-08-14T21:52:53.8445756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8445860Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8446162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8446267Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8446565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8446669Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8446973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:52:53.8447102Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:52:53.8447117Z 2025-08-14T21:52:53.8447252Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8447497Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8447580Z return mod(**inputs) 2025-08-14T21:52:53.8447888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8447979Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8448329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8448425Z layer_outputs = layer_module( 2025-08-14T21:52:53.8449094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8449201Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8449507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8449608Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8449942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8450051Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8450353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:52:53.8450494Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:53.8450507Z 2025-08-14T21:52:53.8450668Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8450932Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8451028Z return mod(**inputs) 2025-08-14T21:52:53.8451333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8451435Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8451739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8451836Z layer_outputs = layer_module( 2025-08-14T21:52:53.8452112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8452208Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8452513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8452613Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8452910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8453069Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8453367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:52:53.8453497Z attn_output = self.o(attn_output) 2025-08-14T21:52:53.8453510Z 2025-08-14T21:52:53.8453636Z cudagraph partition due to non gpu ops. 
2025-08-14T21:52:53.845Z – 2025-08-14T21:52:53.863Z  cudagraph partition due to non gpu ops. Found from :
Every record in this window carries that same message and the same call path, differing only in the final frame inside modeling_mt5.py:

  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)

Feed-forward records (via line 609, hidden_states = self.layer[-1](hidden_states), then line 216, forwarded_states = self.DenseReluDense(forwarded_states)) end at:
  line 183: hidden_gelu = self.act(self.wi_0(hidden_states))
  line 184: hidden_linear = self.wi_1(hidden_states)
  line 185: hidden_states = hidden_gelu * hidden_linear
  line 198: hidden_states = self.wo(hidden_states)

Self-attention records (via line 559, self_attention_outputs = self.layer[0](, then line 475, attention_output = self.SelfAttention() end at:
  line 365: query_states = self.q(hidden_states)
  line 385: key_states = self.k(current_states)
  line 401: scores = torch.matmul(query_states, key_states.transpose(3, 2))
  line 386: value_states = self.v(current_states)
  line 440: attn_output = torch.matmul(attn_weights, value_states)
  line 442: attn_output = attn_output.transpose(1, 2).contiguous()
  line 444: attn_output = self.o(attn_output)
  line 485 (directly under line 559): hidden_states = hidden_states + self.dropout(attention_output[0])

Cross-attention records (via line 583, cross_attention_outputs = self.layer[1](, then line 512, attention_output = self.EncDecAttention() end at:
  line 365: query_states = self.q(hidden_states)
  line 385: key_states = self.k(current_states)
  line 401: scores = torch.matmul(query_states, key_states.transpose(3, 2))
  line 386: value_states = self.v(current_states)
  line 440: attn_output = torch.matmul(attn_weights, value_states)
  line 442: attn_output = attn_output.transpose(1, 2).contiguous()
  line 444: attn_output = self.o(attn_output)
  line 524 (directly under line 583): layer_output = hidden_states + self.dropout(attention_output[0])

This feed-forward / self-attention / cross-attention sequence of records then repeats twice more in the log, the second repetition cut off after the self-attention value projection (line 386).
Found from : 2025-08-14T21:52:53.8632864Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8632956Z return mod(**inputs) 2025-08-14T21:52:53.8633260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8633351Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8633658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8633744Z layer_outputs = layer_module( 2025-08-14T21:52:53.8634021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8634126Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8634424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8634531Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8634831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8634935Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8635263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:52:53.8635402Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:52:53.8635414Z 2025-08-14T21:52:53.8635546Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8635792Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8635893Z return mod(**inputs) 2025-08-14T21:52:53.8636210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8636301Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8636604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8636716Z layer_outputs = layer_module( 2025-08-14T21:52:53.8637048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8637224Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8637525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8637625Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8637933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8638033Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8638343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:52:53.8638503Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:53.8638516Z 2025-08-14T21:52:53.8638640Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8638916Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8638995Z return mod(**inputs) 2025-08-14T21:52:53.8639302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8639400Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8639703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8639801Z layer_outputs = layer_module( 2025-08-14T21:52:53.8640080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8640180Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8640487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8640588Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8640894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8640994Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8641393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:52:53.8641495Z attn_output = self.o(attn_output) 2025-08-14T21:52:53.8641508Z 2025-08-14T21:52:53.8641633Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8641879Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8641969Z return mod(**inputs) 2025-08-14T21:52:53.8642272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8642370Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8642675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8642762Z layer_outputs = layer_module( 2025-08-14T21:52:53.8643094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8643192Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8643492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8643620Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8643918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8644031Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8644328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:52:53.8644424Z query_states = self.q(hidden_states) 2025-08-14T21:52:53.8644437Z 2025-08-14T21:52:53.8644570Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8644817Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8644901Z return mod(**inputs) 2025-08-14T21:52:53.8645204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8645294Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8645604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8645692Z layer_outputs = layer_module( 2025-08-14T21:52:53.8645999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8646100Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8646421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8646527Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8646827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8646929Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8647232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:52:53.8647326Z key_states = self.k(current_states) 2025-08-14T21:52:53.8647339Z 2025-08-14T21:52:53.8647474Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8647721Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8647801Z return mod(**inputs) 2025-08-14T21:52:53.8648112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8648204Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8648512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8648606Z layer_outputs = layer_module( 2025-08-14T21:52:53.8649223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8649334Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8649639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8649738Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8650043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8650147Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8650448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:52:53.8650666Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:52:53.8650680Z 2025-08-14T21:52:53.8650810Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8651066Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8651146Z return mod(**inputs) 2025-08-14T21:52:53.8651536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8655803Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8656114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8656213Z layer_outputs = layer_module( 2025-08-14T21:52:53.8656493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8656592Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8656899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8656999Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8657303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8657412Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8657711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:52:53.8657850Z value_states = self.v(current_states) 2025-08-14T21:52:53.8657863Z 2025-08-14T21:52:53.8657989Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8658236Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8658360Z return mod(**inputs) 2025-08-14T21:52:53.8658666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8658756Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8659065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8659154Z layer_outputs = layer_module( 2025-08-14T21:52:53.8659473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8659592Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8659894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8660001Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8660300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8660412Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8660712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:52:53.8660846Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:52:53.8660858Z 2025-08-14T21:52:53.8660989Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8661237Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8661318Z return mod(**inputs) 2025-08-14T21:52:53.8661633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8661724Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8662038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8662127Z layer_outputs = layer_module( 2025-08-14T21:52:53.8662426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8662534Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8662836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8662942Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8663269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8663371Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8663681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:52:53.8663815Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:53.8663830Z 2025-08-14T21:52:53.8663957Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8664209Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8664291Z return mod(**inputs) 2025-08-14T21:52:53.8664605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8664694Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8664999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8665096Z layer_outputs = layer_module( 2025-08-14T21:52:53.8665372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8665490Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8665843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8665964Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8666344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8666446Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8666744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:52:53.8666850Z attn_output = self.o(attn_output) 2025-08-14T21:52:53.8666862Z 2025-08-14T21:52:53.8666992Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8667247Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8667328Z return mod(**inputs) 2025-08-14T21:52:53.8667630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8667727Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8668078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8668169Z layer_outputs = layer_module( 2025-08-14T21:52:53.8668454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8668552Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8668859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8668971Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8669269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8669419Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8669721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 183, in forward 2025-08-14T21:52:53.8669852Z hidden_gelu = self.act(self.wi_0(hidden_states)) 2025-08-14T21:52:53.8669865Z 2025-08-14T21:52:53.8670014Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8670261Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8670346Z return mod(**inputs) 2025-08-14T21:52:53.8670677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8670768Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8671081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8671172Z layer_outputs = layer_module( 2025-08-14T21:52:53.8671459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8671558Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8671860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8671979Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8672276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8672418Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8672726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 184, in forward 2025-08-14T21:52:53.8672825Z hidden_linear = self.wi_1(hidden_states) 2025-08-14T21:52:53.8672861Z 2025-08-14T21:52:53.8672994Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8673241Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8673347Z return mod(**inputs) 2025-08-14T21:52:53.8673658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8673750Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8674061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8674151Z layer_outputs = layer_module( 2025-08-14T21:52:53.8674432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8674540Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8674843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8674956Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8675260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8675403Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8675711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 185, in forward 2025-08-14T21:52:53.8675819Z hidden_states = hidden_gelu * hidden_linear 2025-08-14T21:52:53.8675832Z 2025-08-14T21:52:53.8675956Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8676211Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8676293Z return mod(**inputs) 2025-08-14T21:52:53.8676605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8676699Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8677004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8677102Z layer_outputs = layer_module( 2025-08-14T21:52:53.8677381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8677500Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8677810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8677919Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8678253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8678394Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8678697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 198, in forward 2025-08-14T21:52:53.8678805Z hidden_states = self.wo(hidden_states) 2025-08-14T21:52:53.8678820Z 2025-08-14T21:52:53.8678945Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8679196Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8679278Z return mod(**inputs) 2025-08-14T21:52:53.8679581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8679678Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8679982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8680073Z layer_outputs = layer_module( 2025-08-14T21:52:53.8680416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8680538Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8685095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8685233Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8685541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 217, in forward 2025-08-14T21:52:53.8685712Z hidden_states = hidden_states + self.dropout(forwarded_states) 2025-08-14T21:52:53.8685725Z 2025-08-14T21:52:53.8685856Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8686116Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8686208Z return mod(**inputs) 2025-08-14T21:52:53.8686515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8686619Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8686925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8687017Z layer_outputs = layer_module( 2025-08-14T21:52:53.8687305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8687407Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8687721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8687825Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8688129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8688241Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8688538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:52:53.8688635Z query_states = self.q(hidden_states) 2025-08-14T21:52:53.8688655Z 2025-08-14T21:52:53.8688784Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8771520Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8771613Z return mod(**inputs) 2025-08-14T21:52:53.8771917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8772012Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8772314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8772403Z layer_outputs = layer_module( 2025-08-14T21:52:53.8772688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8772785Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8773089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8773195Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8773495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8773602Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8773924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:52:53.8774064Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:53.8774076Z 2025-08-14T21:52:53.8774211Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8774477Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8774567Z return mod(**inputs) 2025-08-14T21:52:53.8774874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8774965Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8775275Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8775364Z layer_outputs = layer_module( 2025-08-14T21:52:53.8775645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8775754Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8776056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8776160Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8776463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8776565Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8776895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:52:53.8776989Z attn_output = self.o(attn_output) 2025-08-14T21:52:53.8777022Z 2025-08-14T21:52:53.8777153Z cudagraph partition due to non gpu ops. 
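The preceding entries walk the self-attention path frame by frame: the q, k and v projections (lines 365, 385, 386), the score matmul (line 401), the value matmul (line 440), the transpose/contiguous (line 442) and the o projection (line 444). A minimal per-head sketch of that sequence, with hypothetical dimensions and an added softmax that the traceback does not show; it mirrors the operations named in the frames rather than the exact transformers code.

import torch
from torch import nn

class AttentionPathSketch(nn.Module):
    """Follows the operations named in the frames: q/k/v projections,
    scores = matmul(q, k.transpose(3, 2)), attn = matmul(weights, v),
    transpose(1, 2).contiguous(), then the o projection."""

    def __init__(self, d_model: int = 512, n_heads: int = 8) -> None:
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.head_dim = n_heads, d_model // n_heads
        self.q = nn.Linear(d_model, d_model, bias=False)
        self.k = nn.Linear(d_model, d_model, bias=False)
        self.v = nn.Linear(d_model, d_model, bias=False)
        self.o = nn.Linear(d_model, d_model, bias=False)

    def _split(self, x: torch.Tensor) -> torch.Tensor:
        # [batch, seq, d_model] -> [batch, heads, seq, head_dim]
        b, s, _ = x.shape
        return x.view(b, s, self.n_heads, self.head_dim).transpose(1, 2)

    def forward(self, hidden_states: torch.Tensor, current_states: torch.Tensor) -> torch.Tensor:
        query_states = self._split(self.q(hidden_states))                 # frame: line 365
        key_states = self._split(self.k(current_states))                  # frame: line 385
        value_states = self._split(self.v(current_states))                # frame: line 386
        scores = torch.matmul(query_states, key_states.transpose(3, 2))   # frame: line 401
        attn_weights = torch.softmax(scores, dim=-1)                      # not shown in the frames
        attn_output = torch.matmul(attn_weights, value_states)            # frame: line 440
        attn_output = attn_output.transpose(1, 2).contiguous()            # frame: line 442
        b, s, _, _ = attn_output.shape
        return self.o(attn_output.view(b, s, -1))                         # frame: line 444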
Found from : 2025-08-14T21:52:53.8777406Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8777488Z return mod(**inputs) 2025-08-14T21:52:53.8777798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8777888Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8778196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8778290Z layer_outputs = layer_module( 2025-08-14T21:52:53.8778566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8778671Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8778970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8779070Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8779377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 485, in forward 2025-08-14T21:52:53.8779538Z hidden_states = hidden_states + self.dropout(attention_output[0]) 2025-08-14T21:52:53.8779551Z 2025-08-14T21:52:53.8779679Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8779932Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8780013Z return mod(**inputs) 2025-08-14T21:52:53.8780324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8780416Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8780716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8780811Z layer_outputs = layer_module( 2025-08-14T21:52:53.8781086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8781209Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8781510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8781612Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8781995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8786331Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8786639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:52:53.8786746Z query_states = self.q(hidden_states) 2025-08-14T21:52:53.8786759Z 2025-08-14T21:52:53.8786890Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8787144Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8787225Z return mod(**inputs) 2025-08-14T21:52:53.8787531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8787629Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8787930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8788021Z layer_outputs = layer_module( 2025-08-14T21:52:53.8788306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8788446Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8788753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8788916Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8789213Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8789327Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8789626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:52:53.8789725Z key_states = self.k(current_states) 2025-08-14T21:52:53.8789738Z 2025-08-14T21:52:53.8789865Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8790111Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8790199Z return mod(**inputs) 2025-08-14T21:52:53.8790506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8790593Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8790903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8790989Z layer_outputs = layer_module( 2025-08-14T21:52:53.8791277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8791372Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8791671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8791782Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8792084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8792188Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8792498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:52:53.8792662Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:52:53.8792675Z 2025-08-14T21:52:53.8792810Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8793081Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8793162Z return mod(**inputs) 2025-08-14T21:52:53.8793474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8793585Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8793895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8793985Z layer_outputs = layer_module( 2025-08-14T21:52:53.8794260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8794365Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8794661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8794762Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8795066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8795168Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8795477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:52:53.8795573Z value_states = self.v(current_states) 2025-08-14T21:52:53.8795586Z 2025-08-14T21:52:53.8795711Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8795987Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8796066Z return mod(**inputs) 2025-08-14T21:52:53.8796457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8796550Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8796929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8797022Z layer_outputs = layer_module( 2025-08-14T21:52:53.8797303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8797401Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8797718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8797821Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8798131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8798236Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8798534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:52:53.8798675Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:52:53.8798688Z 2025-08-14T21:52:53.8798814Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8799065Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8799146Z return mod(**inputs) 2025-08-14T21:52:53.8799452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8799549Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8799854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8799941Z layer_outputs = layer_module( 2025-08-14T21:52:53.8800228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8800323Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8800652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8800757Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8801056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8801286Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8801587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:52:53.8801720Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:53.8801738Z 2025-08-14T21:52:53.8801862Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8802109Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8802194Z return mod(**inputs) 2025-08-14T21:52:53.8802500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8802587Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8802895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8802986Z layer_outputs = layer_module( 2025-08-14T21:52:53.8803272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8803368Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8803689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8803800Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8804124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8804230Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8804540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:52:53.8804636Z attn_output = self.o(attn_output) 2025-08-14T21:52:53.8804648Z 2025-08-14T21:52:53.8804782Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8805030Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8805112Z return mod(**inputs) 2025-08-14T21:52:53.8805431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8805520Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8805820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8805917Z layer_outputs = layer_module( 2025-08-14T21:52:53.8806202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8806304Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8806610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8806722Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8807030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8807178Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8807487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 183, in forward 2025-08-14T21:52:53.8807610Z hidden_gelu = self.act(self.wi_0(hidden_states)) 2025-08-14T21:52:53.8807623Z 2025-08-14T21:52:53.8807746Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8808024Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8808106Z return mod(**inputs) 2025-08-14T21:52:53.8808412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8808508Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8808834Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8808929Z layer_outputs = layer_module( 2025-08-14T21:52:53.8809211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8809307Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8809618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8809732Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8810040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8810181Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8810487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 184, in forward 2025-08-14T21:52:53.8810598Z hidden_linear = self.wi_1(hidden_states) 2025-08-14T21:52:53.8810611Z 2025-08-14T21:52:53.8810758Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8811054Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8815383Z return mod(**inputs) 2025-08-14T21:52:53.8815693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8815821Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8816126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8816216Z layer_outputs = layer_module( 2025-08-14T21:52:53.8816504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8816602Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8816919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8817035Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8817336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8817488Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8817791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 185, in forward 2025-08-14T21:52:53.8817903Z hidden_states = hidden_gelu * hidden_linear 2025-08-14T21:52:53.8817916Z 2025-08-14T21:52:53.8818050Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8818299Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8818390Z return mod(**inputs) 2025-08-14T21:52:53.8818697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8818787Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8819161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8819255Z layer_outputs = layer_module( 2025-08-14T21:52:53.8819536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8819647Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8819973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8820095Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8820396Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8820569Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8820878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 198, in forward 2025-08-14T21:52:53.8820979Z hidden_states = self.wo(hidden_states) 2025-08-14T21:52:53.8820992Z 2025-08-14T21:52:53.8821128Z cudagraph partition due to non gpu ops. 
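Taken together, the frames at modeling_mt5.py lines 559, 583 and 609 show the decoder block calling self-attention (layer[0]), cross-attention (layer[1]) and the feed-forward sublayer (layer[-1]), with a dropout-plus-residual add after each attention sublayer (lines 485 and 524). A minimal sketch of that call order, composing generic submodules; the wiring follows the frames, while the module interfaces are hypothetical.

import torch
from torch import nn

class DecoderBlockSketch(nn.Module):
    """Wiring mirrored from the frames: layer[0] self-attention (line 559),
    layer[1] cross-attention (line 583), layer[-1] feed-forward (line 609),
    with dropout + residual adds as at lines 485 and 524."""

    def __init__(self, self_attn: nn.Module, cross_attn: nn.Module,
                 feed_forward: nn.Module, dropout: float = 0.1) -> None:
        super().__init__()
        self.layer = nn.ModuleList([self_attn, cross_attn, feed_forward])
        self.dropout = nn.Dropout(dropout)

    def forward(self, hidden_states: torch.Tensor,
                encoder_hidden_states: torch.Tensor) -> torch.Tensor:
        attention_output = self.layer[0](hidden_states, hidden_states)      # self-attention
        hidden_states = hidden_states + self.dropout(attention_output)      # residual (line 485)
        cross_output = self.layer[1](hidden_states, encoder_hidden_states)  # cross-attention
        hidden_states = hidden_states + self.dropout(cross_output)          # residual (line 524)
        return self.layer[-1](hidden_states)                                # feed-forward (line 609)

# Could be composed, for example, with the attention and feed-forward sketches above.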
Found from : 2025-08-14T21:52:53.8821376Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8821456Z return mod(**inputs) 2025-08-14T21:52:53.8821766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8821857Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8822160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8822258Z layer_outputs = layer_module( 2025-08-14T21:52:53.8822537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8822642Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8822969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8823070Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8823404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8823504Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8823812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:52:53.8823906Z query_states = self.q(hidden_states) 2025-08-14T21:52:53.8823918Z 2025-08-14T21:52:53.8824045Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8824301Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8824380Z return mod(**inputs) 2025-08-14T21:52:53.8824681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8824780Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8825083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8825178Z layer_outputs = layer_module( 2025-08-14T21:52:53.8825508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8825673Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8825984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8826084Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8826385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8826493Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8826793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:52:53.8826892Z key_states = self.k(current_states) 2025-08-14T21:52:53.8826906Z 2025-08-14T21:52:53.8827033Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8827325Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8827429Z return mod(**inputs) 2025-08-14T21:52:53.8827731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8827828Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8828150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8828239Z layer_outputs = layer_module( 2025-08-14T21:52:53.8828526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8828625Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8828927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8829033Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8829336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8829445Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8829743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:52:53.8829901Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:52:53.8829916Z 2025-08-14T21:52:53.8830048Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8830293Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8830412Z return mod(**inputs) 2025-08-14T21:52:53.8830720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8830832Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8831146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8831234Z layer_outputs = layer_module( 2025-08-14T21:52:53.8831518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8831621Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8831925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8832032Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8832334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8832437Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8832744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:52:53.8832840Z value_states = self.v(current_states) 2025-08-14T21:52:53.8832852Z 2025-08-14T21:52:53.8832990Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8833235Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8833316Z return mod(**inputs) 2025-08-14T21:52:53.8833625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8833715Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8834018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8834120Z layer_outputs = layer_module( 2025-08-14T21:52:53.8834397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8834503Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8834827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8834928Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8835238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8835338Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8835657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:52:53.8835798Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:52:53.8835813Z 2025-08-14T21:52:53.8835939Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8836195Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8836278Z return mod(**inputs) 2025-08-14T21:52:53.8836584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8836683Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8836985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8837082Z layer_outputs = layer_module( 2025-08-14T21:52:53.8837362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8837460Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8837765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8837898Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8838198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8838327Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8838627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:52:53.8838770Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:53.8838783Z 2025-08-14T21:52:53.8838908Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8839156Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8839253Z return mod(**inputs) 2025-08-14T21:52:53.8839557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8839650Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8840016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8844343Z layer_outputs = layer_module( 2025-08-14T21:52:53.8844645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8844747Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8845053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:52:53.8845169Z self_attention_outputs = self.layer[0]( 2025-08-14T21:52:53.8845470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:52:53.8845581Z attention_output = self.SelfAttention( 2025-08-14T21:52:53.8845882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:52:53.8845985Z attn_output = self.o(attn_output) 2025-08-14T21:52:53.8845997Z 2025-08-14T21:52:53.8846130Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8846380Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8846461Z return mod(**inputs) 2025-08-14T21:52:53.8846799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8846890Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8847200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8847315Z layer_outputs = layer_module( 2025-08-14T21:52:53.8847594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8847699Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8848003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8848109Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8848407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8848511Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8849151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:52:53.8849249Z query_states = self.q(hidden_states) 2025-08-14T21:52:53.8849262Z 2025-08-14T21:52:53.8849394Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8849653Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8849790Z return mod(**inputs) 2025-08-14T21:52:53.8850101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8850191Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8850523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8850616Z layer_outputs = layer_module( 2025-08-14T21:52:53.8850898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8850994Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8851302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8851407Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8851710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8851814Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8852111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:52:53.8852214Z key_states = self.k(current_states) 2025-08-14T21:52:53.8852227Z 2025-08-14T21:52:53.8852352Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8852608Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8852688Z return mod(**inputs) 2025-08-14T21:52:53.8852992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8853094Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8853399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8853486Z layer_outputs = layer_module( 2025-08-14T21:52:53.8853778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8853874Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8854181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8854325Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8854745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8854860Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8855160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:52:53.8855352Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:52:53.8855372Z 2025-08-14T21:52:53.8855499Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8855748Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8855835Z return mod(**inputs) 2025-08-14T21:52:53.8856138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8856230Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8856542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8856630Z layer_outputs = layer_module( 2025-08-14T21:52:53.8856918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8857014Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8857317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8857446Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8857751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8857855Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8858184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:52:53.8858283Z value_states = self.v(current_states) 2025-08-14T21:52:53.8858296Z 2025-08-14T21:52:53.8858479Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8858727Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8858813Z return mod(**inputs) 2025-08-14T21:52:53.8859132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8859221Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8859533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8859621Z layer_outputs = layer_module( 2025-08-14T21:52:53.8859901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8860009Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8860311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8860412Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8860717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8860822Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8861125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:52:53.8861260Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:52:53.8861273Z 2025-08-14T21:52:53.8861399Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8861652Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8861734Z return mod(**inputs) 2025-08-14T21:52:53.8862071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8881741Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8882195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8882294Z layer_outputs = layer_module( 2025-08-14T21:52:53.8882690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8882810Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8883134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8883272Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8883720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8883833Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8884146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:52:53.8884290Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:52:53.8884305Z 2025-08-14T21:52:53.8884454Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8884715Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8884803Z return mod(**inputs) 2025-08-14T21:52:53.8885126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8885258Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8885579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8885696Z layer_outputs = layer_module( 2025-08-14T21:52:53.8885987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8886095Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8886399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8886515Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8886816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:52:53.8886926Z attention_output = self.EncDecAttention( 2025-08-14T21:52:53.8887235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:52:53.8887333Z attn_output = self.o(attn_output) 2025-08-14T21:52:53.8887349Z 2025-08-14T21:52:53.8887509Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8887803Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8887886Z return mod(**inputs) 2025-08-14T21:52:53.8888208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8888301Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8888612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8888709Z layer_outputs = layer_module( 2025-08-14T21:52:53.8888991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8889096Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8889406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:52:53.8889510Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:52:53.8889846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 524, in forward 2025-08-14T21:52:53.8890008Z layer_output = hidden_states + self.dropout(attention_output[0]) 2025-08-14T21:52:53.8890021Z 2025-08-14T21:52:53.8890154Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8890429Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8890517Z return mod(**inputs) 2025-08-14T21:52:53.8890837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8890934Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8891238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8891338Z layer_outputs = layer_module( 2025-08-14T21:52:53.8891620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8891717Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8892025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8892140Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8892452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8892598Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8892922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 183, in forward 2025-08-14T21:52:53.8893055Z hidden_gelu = self.act(self.wi_0(hidden_states)) 2025-08-14T21:52:53.8893090Z 2025-08-14T21:52:53.8893220Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8893477Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8893561Z return mod(**inputs) 2025-08-14T21:52:53.8893866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8893968Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8894273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8894361Z layer_outputs = layer_module( 2025-08-14T21:52:53.8894648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8894747Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8895053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8895166Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8895469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8895621Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8895920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 184, in forward 2025-08-14T21:52:53.8896026Z hidden_linear = self.wi_1(hidden_states) 2025-08-14T21:52:53.8896041Z 2025-08-14T21:52:53.8896169Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8896416Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8896510Z return mod(**inputs) 2025-08-14T21:52:53.8896813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8896903Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8897250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8897342Z layer_outputs = layer_module( 2025-08-14T21:52:53.8897638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8897752Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8898193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8898320Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8898624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8898768Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8899075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 185, in forward 2025-08-14T21:52:53.8899186Z hidden_states = hidden_gelu * hidden_linear 2025-08-14T21:52:53.8899198Z 2025-08-14T21:52:53.8899334Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:53.8899582Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8899664Z return mod(**inputs) 2025-08-14T21:52:53.8899983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:52:53.8900075Z decoder_outputs = self.decoder( 2025-08-14T21:52:53.8900386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:52:53.8900498Z layer_outputs = layer_module( 2025-08-14T21:52:53.8900779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:53.8900912Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:53.8901215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:52:53.8901327Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:52:53.8901634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:52:53.8901776Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:52:53.8902092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 198, in forward 2025-08-14T21:52:53.8902194Z hidden_states = self.wo(hidden_states) 2025-08-14T21:52:53.8902209Z 2025-08-14T21:52:53.8902341Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:53.8902597Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:53.8902682Z return mod(**inputs) 2025-08-14T21:52:53.8903000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1816, in forward 2025-08-14T21:52:53.8903113Z lm_logits = self.lm_head(sequence_output) 2025-08-14T21:52:53.8903126Z 2025-08-14T21:52:53.8903253Z cudagraph partition due to non gpu ops. 
2025-08-14T21:52:53.8903253Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:52:53.8903511Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:52:53.8903594Z return mod(**inputs)
2025-08-14T21:52:53.8903900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1823, in forward
2025-08-14T21:52:53.8904091Z loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
2025-08-14T21:52:53.8904104Z
2025-08-14T21:53:03.2060371Z Compilation time (from dynamo_timed): 31.607747374
2025-08-14T21:53:03.2298133Z pass
2025-08-14T21:53:03.2298748Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:53:03.2300196Z TIMING: _recursive_pre_grad_passes:0.12078 _recursive_joint_graph_passes:1.0775 _recursive_post_grad_passes:0.35814 async_compile.wait:1.01841 code_gen:8.92639 inductor_compile:14.13472 backend_compile:25.97829 gc:0.00096 entire_frame_compile:31.60775 total_wall_time:31.60775
2025-08-14T21:53:03.2301428Z STATS: call_* op count: 1189 | FakeTensorMode.__torch_dispatch__:50742 | FakeTensor.__torch_dispatch__:8076 | ProxyTorchDispatchMode.__torch_dispatch__:12602
2025-08-14T21:53:03.2302132Z Dynamo produced 1 graphs covering 1189 ops with 0 graph breaks (0 unique)
2025-08-14T21:53:09.8363820Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:53:09.8365354Z from pkg_resources import resource_filename
2025-08-14T21:53:10.5688964Z
2025-08-14T21:53:10.5862341Z loading model: 0it [00:00, ?it/s]If you want to use `MegatronBertForCausalLM` as a standalone, add `is_decoder=True.`
2025-08-14T21:53:10.5863118Z WARNING:transformers.models.megatron_bert.modeling_megatron_bert:If you want to use `MegatronBertForCausalLM` as a standalone, add `is_decoder=True.`
2025-08-14T21:53:16.1424960Z
2025-08-14T21:53:16.1425294Z loading model: 0it [00:05, ?it/s]
2025-08-14T21:53:16.1456835Z cpu eval MegatronBertForCausalLM
2025-08-14T21:53:19.0817316Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:53:20.4919393Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:53:21.9065444Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:53:48.3873066Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3873487Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3873843Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3874194Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3874537Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3874864Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3875201Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3875501Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3875800Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3876086Z cudagraph partition due to non gpu ops
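Two of the warnings above state their own remediation: the llvmlite pkg_resources deprecation asks for Setuptools pinned below 81, and transformers asks for `is_decoder=True` when `MegatronBertForCausalLM` is loaded standalone. A hedged sketch of the latter, with a placeholder checkpoint name, since the benchmark's actual loading path is not shown in this log:

    # Hedged sketch of the fix the transformers warning suggests: mark the config
    # as a decoder before instantiating MegatronBertForCausalLM standalone.
    # "nvidia/megatron-bert-cased-345m" is a placeholder checkpoint name.
    from transformers import AutoConfig, MegatronBertForCausalLM

    config = AutoConfig.from_pretrained("nvidia/megatron-bert-cased-345m")
    config.is_decoder = True
    model = MegatronBertForCausalLM.from_pretrained(
        "nvidia/megatron-bert-cased-345m", config=config
    )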
2025-08-14T21:53:48.3876492Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:53:48.3877132Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:53:48.3877717Z return mod(**inputs)
2025-08-14T21:53:48.3878391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward
2025-08-14T21:53:48.3879061Z outputs = self.bert(
2025-08-14T21:53:48.3879655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward
2025-08-14T21:53:48.3884572Z encoder_outputs = self.encoder(
2025-08-14T21:53:48.3885119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward
2025-08-14T21:53:48.3885659Z layer_outputs = layer_module(
2025-08-14T21:53:48.3886097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:53:48.3886613Z return super().__call__(*args, **kwargs)
2025-08-14T21:53:48.3887152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward
2025-08-14T21:53:48.3887701Z layer_output = apply_chunking_to_forward(
2025-08-14T21:53:48.3888199Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:53:48.3888967Z return forward_fn(*input_tensors)
2025-08-14T21:53:48.3889598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk
2025-08-14T21:53:48.3890290Z intermediate_output = self.intermediate(ln_output)
2025-08-14T21:53:48.3891000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward
2025-08-14T21:53:48.3891670Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:53:48.3892190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:53:48.3892690Z return self.act(input)
2025-08-14T21:53:48.3892858Z
2025-08-14T21:53:48.3892963Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3893221Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3893468Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3893714Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3893961Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3894249Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3894613Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3894864Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3895173Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3895413Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3895654Z cudagraph partition due to non gpu ops
2025-08-14T21:53:48.3895996Z cudagraph partition due to non gpu ops.
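Each of the MegatronBert traces passes through transformers.pytorch_utils.apply_chunking_to_forward before reaching the GELU activation that Inductor flags. That helper slices its inputs along a chosen dimension and applies the feed-forward function chunk by chunk to bound peak memory (with chunk size 0 it calls the function directly, which is the path visible at line 251 above). A small usage sketch with illustrative shapes and chunk size:

    # Usage sketch of apply_chunking_to_forward, the helper visible in the frames
    # above: it splits the inputs along chunk_dim, runs forward_fn on each slice,
    # and concatenates the results. Shapes and chunk size here are illustrative.
    import torch
    import torch.nn as nn
    from transformers.pytorch_utils import apply_chunking_to_forward

    ffn = nn.Sequential(nn.Linear(32, 128), nn.GELU(), nn.Linear(128, 32))

    def feed_forward_chunk(attention_output: torch.Tensor) -> torch.Tensor:
        return ffn(attention_output)

    hidden = torch.randn(2, 16, 32)  # (batch, seq_len, hidden)
    out = apply_chunking_to_forward(feed_forward_chunk, 4, 1, hidden)  # chunk the seq dim
    assert out.shape == hidden.shape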
Found from : 2025-08-14T21:53:48.3896444Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.3896850Z return mod(**inputs) 2025-08-14T21:53:48.3897410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.3897939Z outputs = self.bert( 2025-08-14T21:53:48.3898431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.3898961Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.3899485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.3900004Z layer_outputs = layer_module( 2025-08-14T21:53:48.3900441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.3900902Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.3901551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.3902351Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.3903137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.3903896Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.3904732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.3905608Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.3906327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.3907068Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.3907659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.3908218Z return self.act(input) 2025-08-14T21:53:48.3908365Z 2025-08-14T21:53:48.3908463Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3908716Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3918520Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3918904Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3919233Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3919542Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3919790Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3920170Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3920496Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3920790Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3921113Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3921479Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.3921984Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.3922662Z return mod(**inputs) 2025-08-14T21:53:48.3923297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.3923958Z outputs = self.bert( 2025-08-14T21:53:48.3924522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.3925053Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.3925571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.3926103Z layer_outputs = layer_module( 2025-08-14T21:53:48.3926580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.3927026Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.3927559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.3928150Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.3928825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.3929451Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.3930133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.3930879Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.3931742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.3932588Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.3933239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.3933847Z return self.act(input) 2025-08-14T21:53:48.3934005Z 2025-08-14T21:53:48.3934172Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3934589Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3934922Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3935217Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3935519Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3935834Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3936128Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3936431Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3936707Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3936995Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3937376Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3937702Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.3938298Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.3938767Z return mod(**inputs) 2025-08-14T21:53:48.3939310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.3939841Z outputs = self.bert( 2025-08-14T21:53:48.3940338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.3940866Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.3941435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.3941970Z layer_outputs = layer_module( 2025-08-14T21:53:48.3942455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.3942899Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.3943430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.3943975Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.3944475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.3944964Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.3945530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.3946133Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.3946733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.3947300Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.3947915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.3948393Z return self.act(input) 2025-08-14T21:53:48.3948556Z 2025-08-14T21:53:48.3949035Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3949392Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3949712Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3950021Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3950301Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3950600Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3950900Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3951189Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3951482Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3951772Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3952062Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3952432Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.3959267Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.3959776Z return mod(**inputs) 2025-08-14T21:53:48.3960388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.3960916Z outputs = self.bert( 2025-08-14T21:53:48.3961500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.3962023Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.3962541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.3963073Z layer_outputs = layer_module( 2025-08-14T21:53:48.3963503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.3963951Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.3964584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.3965125Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.3965616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.3966110Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.3966739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.3967423Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.3968034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.3968607Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.3969086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.3969514Z return self.act(input) 2025-08-14T21:53:48.3969651Z 2025-08-14T21:53:48.3969748Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3970001Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3970254Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3970491Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3970736Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3970986Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3971222Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3971510Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3971755Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3972004Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3972244Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3972563Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.3973017Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.3973419Z return mod(**inputs) 2025-08-14T21:53:48.3973935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.3974456Z outputs = self.bert( 2025-08-14T21:53:48.3974942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.3975469Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.3975990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.3976510Z layer_outputs = layer_module( 2025-08-14T21:53:48.3976934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.3977386Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.3977915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.3978448Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.3978939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.3979435Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.3979995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.3980587Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.3981146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.3985943Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.3986498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.3986918Z return self.act(input) 2025-08-14T21:53:48.3987061Z 2025-08-14T21:53:48.3987158Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3987414Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3987666Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3987930Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3988177Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3988421Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3988660Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3988905Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3989155Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3989396Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3989651Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.3989942Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.3990385Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.3990790Z return mod(**inputs) 2025-08-14T21:53:48.3991292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.3991814Z outputs = self.bert( 2025-08-14T21:53:48.3992300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.3992855Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.3993375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.3993922Z layer_outputs = layer_module( 2025-08-14T21:53:48.3994341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.3994790Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.3995323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.3995854Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.3996439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.3997004Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.3997561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.3998157Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.3998725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.3999308Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.3999798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.4000213Z return self.act(input) 2025-08-14T21:53:48.4000363Z 2025-08-14T21:53:48.4000461Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4000721Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4000959Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4001278Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4001522Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4001759Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4002002Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4002245Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4002498Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4002734Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4002982Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4003289Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.4003729Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.4004128Z return mod(**inputs) 2025-08-14T21:53:48.4004628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.4005160Z outputs = self.bert( 2025-08-14T21:53:48.4005651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.4006179Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.4006697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.4007217Z layer_outputs = layer_module( 2025-08-14T21:53:48.4007647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.4008098Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.4008630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.4009161Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.4009663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.4010158Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.4014966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.4015616Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.4016210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.4016789Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.4017258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.4017690Z return self.act(input) 2025-08-14T21:53:48.4017829Z 2025-08-14T21:53:48.4017937Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4018191Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4018435Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4018677Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4018928Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4019163Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4019457Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4019710Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4019958Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4020204Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4020448Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4020720Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.4021171Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.4021575Z return mod(**inputs) 2025-08-14T21:53:48.4022075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.4022583Z outputs = self.bert( 2025-08-14T21:53:48.4023072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.4023602Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.4024110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.4024633Z layer_outputs = layer_module( 2025-08-14T21:53:48.4025150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.4025642Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.4026167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.4026729Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.4027236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.4027781Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.4028332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.4028936Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.4029497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.4030062Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.4030521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.4030943Z return self.act(input) 2025-08-14T21:53:48.4031079Z 2025-08-14T21:53:48.4031185Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4031428Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4031674Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4031952Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4032188Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4032432Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4032699Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4032947Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4033183Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4033436Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4033679Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4033957Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.4034409Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.4034812Z return mod(**inputs) 2025-08-14T21:53:48.4035304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.4035832Z outputs = self.bert( 2025-08-14T21:53:48.4036318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.4036840Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.4037356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.4037881Z layer_outputs = layer_module( 2025-08-14T21:53:48.4038317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.4038760Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.4039289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.4044093Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.4044606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.4045102Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.4045668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.4046282Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.4046892Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.4047459Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.4047933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.4048374Z return self.act(input) 2025-08-14T21:53:48.4048511Z 2025-08-14T21:53:48.4048614Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4049192Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4049445Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4049695Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4049932Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4050183Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4050435Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4050673Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4050921Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4051167Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4051403Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4051680Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.4052128Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.4052535Z return mod(**inputs) 2025-08-14T21:53:48.4053026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.4053606Z outputs = self.bert( 2025-08-14T21:53:48.4054165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.4054781Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.4055310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.4055834Z layer_outputs = layer_module( 2025-08-14T21:53:48.4056269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.4056710Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.4057243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.4057781Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.4058294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.4058834Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.4059393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.4059990Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.4060541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.4061113Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.4061583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.4062001Z return self.act(input) 2025-08-14T21:53:48.4062136Z 2025-08-14T21:53:48.4062237Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4062492Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4062736Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4062983Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4063230Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4063475Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4063728Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4063997Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4064244Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4064486Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4064725Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4065001Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.4065476Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.4065868Z return mod(**inputs) 2025-08-14T21:53:48.4066370Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.4066894Z outputs = self.bert( 2025-08-14T21:53:48.4067380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.4067896Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.4068421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.4077379Z layer_outputs = layer_module( 2025-08-14T21:53:48.4077937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.4078523Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.4079242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.4079903Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.4080395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.4080911Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.4081547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.4082151Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.4082705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.4085465Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.4085937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.4086360Z return self.act(input) 2025-08-14T21:53:48.4086498Z 2025-08-14T21:53:48.4086594Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4086848Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4087096Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4087335Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4087587Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4087881Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4088117Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4088361Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4088604Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4088846Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4089090Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4089373Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.4089824Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.4090224Z return mod(**inputs) 2025-08-14T21:53:48.4090720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.4091244Z outputs = self.bert( 2025-08-14T21:53:48.4091728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.4092290Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.4092816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.4093348Z layer_outputs = layer_module( 2025-08-14T21:53:48.4093797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.4094248Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.4094773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.4095316Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.4095809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.4096320Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.4096883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.4097477Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.4098170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.4098740Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.4099214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.4099654Z return self.act(input) 2025-08-14T21:53:48.4099798Z 2025-08-14T21:53:48.4099895Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4100149Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4100417Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4100664Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4100910Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4101152Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4101388Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4101627Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4101867Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4102101Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4102343Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4102624Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.4103064Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.4103474Z return mod(**inputs) 2025-08-14T21:53:48.4103972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.4104494Z outputs = self.bert( 2025-08-14T21:53:48.4104977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.4105505Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.4106032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.4106549Z layer_outputs = layer_module( 2025-08-14T21:53:48.4106979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.4107429Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.4107958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.4108488Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.4108993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.4109483Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.4110063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.4110657Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.4111239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.4111817Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.4116547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.4117023Z return self.act(input) 2025-08-14T21:53:48.4117171Z 2025-08-14T21:53:48.4117269Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4117525Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4117768Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4118015Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4118273Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4118516Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4118759Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4119005Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4119241Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4119484Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4119724Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4120001Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.4120474Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.4120883Z return mod(**inputs) 2025-08-14T21:53:48.4121467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.4122019Z outputs = self.bert( 2025-08-14T21:53:48.4122511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.4123041Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.4123559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.4124076Z layer_outputs = layer_module( 2025-08-14T21:53:48.4124516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.4124973Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.4125508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.4126042Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.4126618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.4127164Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.4127713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.4128308Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.4128869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.4129438Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.4129905Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.4130329Z return self.act(input) 2025-08-14T21:53:48.4130467Z 2025-08-14T21:53:48.4130574Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4130819Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4131069Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4131346Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4131594Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4131828Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4132077Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4132319Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4132580Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4132824Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4133066Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4133343Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.4133789Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.4134191Z return mod(**inputs) 2025-08-14T21:53:48.4134690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.4135210Z outputs = self.bert( 2025-08-14T21:53:48.4135704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.4136225Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.4136737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.4137268Z layer_outputs = layer_module( 2025-08-14T21:53:48.4137696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.4138177Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.4138698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.4139261Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.4139762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.4140251Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.4140798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.4145661Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.4146250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.4146820Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.4147290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.4147716Z return self.act(input) 2025-08-14T21:53:48.4147853Z 2025-08-14T21:53:48.4147955Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4148199Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4148443Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4149000Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4149253Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4149497Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4149743Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4149987Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4150223Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4150466Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4150720Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4150991Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.4151438Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:48.4151848Z return mod(**inputs) 2025-08-14T21:53:48.4152428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:53:48.4152946Z outputs = self.bert( 2025-08-14T21:53:48.4153436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:53:48.4153961Z encoder_outputs = self.encoder( 2025-08-14T21:53:48.4154503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:53:48.4155027Z layer_outputs = layer_module( 2025-08-14T21:53:48.4155461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:53:48.4155993Z return super().__call__(*args, **kwargs) 2025-08-14T21:53:48.4156549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:53:48.4157087Z layer_output = apply_chunking_to_forward( 2025-08-14T21:53:48.4157593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:53:48.4158078Z return forward_fn(*input_tensors) 2025-08-14T21:53:48.4158636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:53:48.4159249Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:53:48.4159811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:53:48.4160411Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:48.4160877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:53:48.4161429Z return self.act(input) 2025-08-14T21:53:48.4161566Z 2025-08-14T21:53:48.4161675Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4161922Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4162190Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4162445Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4162685Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4162944Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4163209Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4163450Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4163699Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4163951Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4164186Z cudagraph partition due to non gpu ops 2025-08-14T21:53:48.4164467Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:48.4288595Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:53:48.4288990Z     return mod(**inputs)
2025-08-14T21:53:48.4289524Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1086, in forward
2025-08-14T21:53:48.4290053Z     lm_loss = self.loss_function(
2025-08-14T21:53:48.4290506Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss
2025-08-14T21:53:48.4291122Z     loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
2025-08-14T21:53:48.4291727Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy
2025-08-14T21:53:48.4292372Z     loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction)
2025-08-14T21:53:48.4292687Z 
2025-08-14T21:53:59.0379721Z Compilation time (from dynamo_timed): 34.519951815
2025-08-14T21:53:59.0477578Z pass
2025-08-14T21:53:59.0479089Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:53:59.0480386Z TIMING: _recursive_pre_grad_passes:0.09938 _recursive_joint_graph_passes:1.10353 _recursive_post_grad_passes:0.18285 async_compile.wait:1.07989 code_gen:8.42487 inductor_compile:14.00906 backend_compile:27.00479 gc:0.00095 entire_frame_compile:34.51995 total_wall_time:34.51995
2025-08-14T21:53:59.0481668Z STATS: call_* op count: 723 | FakeTensorMode.__torch_dispatch__:51441 | FakeTensor.__torch_dispatch__:7316 | ProxyTorchDispatchMode.__torch_dispatch__:12522
2025-08-14T21:53:59.0482297Z Dynamo produced 1 graphs covering 723 ops with 0 graph breaks (0 unique)
2025-08-14T21:54:05.7164863Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:54:05.7166068Z   from pkg_resources import resource_filename
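The TIMING line is a flat key:value breakdown of the 34.52 s reported by dynamo_timed for this frame (backend_compile 27.0 s, inductor_compile 14.0 s, code_gen 8.4 s). A small sketch for pulling those numbers out of a saved log and comparing runs; the log file name is an assumption, not something the job produces under that name.

# Sketch: parse the "TIMING: name:seconds name:seconds ..." summary into a dict.
import re

def parse_timing(line: str) -> dict:
    # Drop everything up to the TIMING: marker, then collect name:seconds pairs.
    payload = line.split("TIMING:", 1)[1]
    return {k: float(v) for k, v in re.findall(r"(\S+):([\d.]+)", payload)}

with open("job.log") as f:          # assumed path to a downloaded copy of this log
    for line in f:
        if "TIMING:" in line:
            timing = parse_timing(line)
            print(f"total: {timing['total_wall_time']:.2f}s, "
                  f"inductor_compile: {timing.get('inductor_compile', 0.0):.2f}s")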
2025-08-14T21:54:06.4675724Z 
2025-08-14T21:54:11.4308910Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:54:11.4309259Z loading model: 0it [00:04, ?it/s]
2025-08-14T21:54:11.4339995Z cpu eval MegatronBertForQuestionAnswering
2025-08-14T21:54:14.3002206Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:54:15.6041544Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:54:16.9772214Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:54:43.2224772Z cudagraph partition due to non gpu ops
2025-08-14T21:54:43.2225135Z cudagraph partition due to non gpu ops
2025-08-14T21:54:43.2225446Z cudagraph partition due to non gpu ops
2025-08-14T21:54:43.2225850Z cudagraph partition due to non gpu ops
2025-08-14T21:54:43.2226120Z cudagraph partition due to non gpu ops
2025-08-14T21:54:43.2226361Z cudagraph partition due to non gpu ops
2025-08-14T21:54:43.2226625Z cudagraph partition due to non gpu ops
2025-08-14T21:54:43.2231149Z cudagraph partition due to non gpu ops
2025-08-14T21:54:43.2231482Z cudagraph partition due to non gpu ops
2025-08-14T21:54:43.2231734Z cudagraph partition due to non gpu ops
2025-08-14T21:54:43.2232018Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:54:43.2232577Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:54:43.2232993Z     return mod(**inputs)
2025-08-14T21:54:43.2233557Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward
2025-08-14T21:54:43.2234170Z     outputs = self.bert(
2025-08-14T21:54:43.2234680Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward
2025-08-14T21:54:43.2235294Z     encoder_outputs = self.encoder(
2025-08-14T21:54:43.2236108Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward
2025-08-14T21:54:43.2236646Z     layer_outputs = layer_module(
2025-08-14T21:54:43.2237173Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:54:43.2237629Z     return super().__call__(*args, **kwargs)
2025-08-14T21:54:43.2238285Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward
2025-08-14T21:54:43.2238865Z     layer_output = apply_chunking_to_forward(
2025-08-14T21:54:43.2239460Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:54:43.2239965Z     return forward_fn(*input_tensors)
2025-08-14T21:54:43.2240617Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk
2025-08-14T21:54:43.2241457Z     intermediate_output = self.intermediate(ln_output)
2025-08-14T21:54:43.2242088Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward
2025-08-14T21:54:43.2242663Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:54:43.2243151Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:54:43.2243571Z     return self.act(input)
2025-08-14T21:54:43.2243777Z 
Found from : 2025-08-14T21:54:43.2469978Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:43.2470381Z return mod(**inputs) 2025-08-14T21:54:43.2470879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward 2025-08-14T21:54:43.2471393Z outputs = self.bert( 2025-08-14T21:54:43.2471882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:54:43.2472407Z encoder_outputs = self.encoder( 2025-08-14T21:54:43.2481265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:54:43.2481835Z layer_outputs = layer_module( 2025-08-14T21:54:43.2482283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:54:43.2482752Z return super().__call__(*args, **kwargs) 2025-08-14T21:54:43.2483298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:54:43.2483860Z layer_output = apply_chunking_to_forward( 2025-08-14T21:54:43.2484372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:54:43.2484867Z return forward_fn(*input_tensors) 2025-08-14T21:54:43.2485431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:54:43.2487333Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:54:43.2492249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:54:43.2492846Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:43.2493381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:54:43.2493814Z return self.act(input) 2025-08-14T21:54:43.2493963Z 2025-08-14T21:54:43.2494066Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2494326Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2494579Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2494824Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2495073Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2495324Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2495577Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2495814Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2496066Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2496313Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2496551Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2496841Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:43.2497304Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:43.2497709Z return mod(**inputs) 2025-08-14T21:54:43.2498243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward 2025-08-14T21:54:43.2498767Z outputs = self.bert( 2025-08-14T21:54:43.2499291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:54:43.2499815Z encoder_outputs = self.encoder( 2025-08-14T21:54:43.2500345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:54:43.2500875Z layer_outputs = layer_module( 2025-08-14T21:54:43.2501301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:54:43.2501752Z return super().__call__(*args, **kwargs) 2025-08-14T21:54:43.2502361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:54:43.2502971Z layer_output = apply_chunking_to_forward( 2025-08-14T21:54:43.2503472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:54:43.2503967Z return forward_fn(*input_tensors) 2025-08-14T21:54:43.2504540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:54:43.2505146Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:54:43.2505701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:54:43.2506274Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:43.2506801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:54:43.2507232Z return self.act(input) 2025-08-14T21:54:43.2507373Z 2025-08-14T21:54:43.2507472Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2507728Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2507978Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2508217Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2508472Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2508748Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2508988Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2509232Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2509477Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2509715Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2509963Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2510279Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:43.2510743Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:43.2511149Z return mod(**inputs) 2025-08-14T21:54:43.2511658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward 2025-08-14T21:54:43.2512196Z outputs = self.bert( 2025-08-14T21:54:43.2512684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:54:43.2513217Z encoder_outputs = self.encoder( 2025-08-14T21:54:43.2513744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:54:43.2514272Z layer_outputs = layer_module( 2025-08-14T21:54:43.2514698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:54:43.2515153Z return super().__call__(*args, **kwargs) 2025-08-14T21:54:43.2515728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:54:43.2516270Z layer_output = apply_chunking_to_forward( 2025-08-14T21:54:43.2520984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:54:43.2521575Z return forward_fn(*input_tensors) 2025-08-14T21:54:43.2522149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:54:43.2522754Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:54:43.2523324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:54:43.2523898Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:43.2524377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:54:43.2524794Z return self.act(input) 2025-08-14T21:54:43.2524941Z 2025-08-14T21:54:43.2525045Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2525314Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2525555Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2525808Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2526055Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2526306Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2526543Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2526791Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2527032Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2527273Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2527516Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2527800Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:43.2528240Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:43.2528645Z return mod(**inputs) 2025-08-14T21:54:43.2529148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward 2025-08-14T21:54:43.2529678Z outputs = self.bert( 2025-08-14T21:54:43.2530195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:54:43.2530725Z encoder_outputs = self.encoder( 2025-08-14T21:54:43.2531325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:54:43.2531900Z layer_outputs = layer_module( 2025-08-14T21:54:43.2532360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:54:43.2532820Z return super().__call__(*args, **kwargs) 2025-08-14T21:54:43.2533357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:54:43.2533894Z layer_output = apply_chunking_to_forward( 2025-08-14T21:54:43.2534403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:54:43.2534904Z return forward_fn(*input_tensors) 2025-08-14T21:54:43.2535471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:54:43.2536124Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:54:43.2536696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:54:43.2537270Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:43.2537777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:54:43.2538195Z return self.act(input) 2025-08-14T21:54:43.2538367Z 2025-08-14T21:54:43.2538468Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2538730Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2538969Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2539227Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2539473Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2539711Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2539956Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2540205Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2540442Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2540693Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2540940Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2541224Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:43.2541668Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:43.2542074Z return mod(**inputs) 2025-08-14T21:54:43.2542572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward 2025-08-14T21:54:43.2543094Z outputs = self.bert( 2025-08-14T21:54:43.2543579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:54:43.2544103Z encoder_outputs = self.encoder( 2025-08-14T21:54:43.2544623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:54:43.2545138Z layer_outputs = layer_module( 2025-08-14T21:54:43.2545568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:54:43.2550475Z return super().__call__(*args, **kwargs) 2025-08-14T21:54:43.2551020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:54:43.2551557Z layer_output = apply_chunking_to_forward( 2025-08-14T21:54:43.2552126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:54:43.2552622Z return forward_fn(*input_tensors) 2025-08-14T21:54:43.2553186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:54:43.2553779Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:54:43.2554373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:54:43.2554950Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:43.2555415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:54:43.2555834Z return self.act(input) 2025-08-14T21:54:43.2555975Z 2025-08-14T21:54:43.2556071Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2556321Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2556561Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2556801Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2557045Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2557278Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2557518Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2557767Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2558009Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2558251Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2558525Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2558807Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:43.2559247Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:43.2559678Z return mod(**inputs) 2025-08-14T21:54:43.2560255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward 2025-08-14T21:54:43.2560827Z outputs = self.bert( 2025-08-14T21:54:43.2561413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:54:43.2561937Z encoder_outputs = self.encoder( 2025-08-14T21:54:43.2562462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:54:43.2562978Z layer_outputs = layer_module( 2025-08-14T21:54:43.2563412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:54:43.2563860Z return super().__call__(*args, **kwargs) 2025-08-14T21:54:43.2564384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:54:43.2564981Z layer_output = apply_chunking_to_forward( 2025-08-14T21:54:43.2565481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:54:43.2565971Z return forward_fn(*input_tensors) 2025-08-14T21:54:43.2566520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:54:43.2567124Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:54:43.2567684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:54:43.2568257Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:43.2568720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:54:43.2569143Z return self.act(input) 2025-08-14T21:54:43.2569279Z 2025-08-14T21:54:43.2569383Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2569655Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2569906Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2570150Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2570393Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2570629Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2570891Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2571138Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2571368Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2571614Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2571855Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2572125Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:43.2572569Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:43.2572978Z return mod(**inputs) 2025-08-14T21:54:43.2573468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward 2025-08-14T21:54:43.2573984Z outputs = self.bert( 2025-08-14T21:54:43.2574468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:54:43.2583456Z encoder_outputs = self.encoder( 2025-08-14T21:54:43.2584160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:54:43.2584903Z layer_outputs = layer_module( 2025-08-14T21:54:43.2585474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:54:43.2585914Z return super().__call__(*args, **kwargs) 2025-08-14T21:54:43.2586474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:54:43.2587012Z layer_output = apply_chunking_to_forward( 2025-08-14T21:54:43.2587504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:54:43.2587988Z return forward_fn(*input_tensors) 2025-08-14T21:54:43.2588546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:54:43.2591300Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:54:43.2591863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:54:43.2592430Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:43.2592900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:54:43.2593318Z return self.act(input) 2025-08-14T21:54:43.2593455Z 2025-08-14T21:54:43.2593555Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2593841Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2594082Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2594326Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2594563Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2594805Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2595039Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2595284Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2595525Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2595762Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2595996Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2596272Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:43.2596717Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:43.2597110Z return mod(**inputs) 2025-08-14T21:54:43.2597632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward 2025-08-14T21:54:43.2598205Z outputs = self.bert( 2025-08-14T21:54:43.2598694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:54:43.2599247Z encoder_outputs = self.encoder( 2025-08-14T21:54:43.2599771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:54:43.2600293Z layer_outputs = layer_module( 2025-08-14T21:54:43.2600715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:54:43.2601235Z return super().__call__(*args, **kwargs) 2025-08-14T21:54:43.2601772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:54:43.2602309Z layer_output = apply_chunking_to_forward( 2025-08-14T21:54:43.2602801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:54:43.2603294Z return forward_fn(*input_tensors) 2025-08-14T21:54:43.2603922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:54:43.2604601Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:54:43.2605157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:54:43.2605753Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:43.2606220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:54:43.2606631Z return self.act(input) 2025-08-14T21:54:43.2606771Z 2025-08-14T21:54:43.2606866Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2607112Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2607352Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2607587Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2607825Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2608069Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2608300Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2608536Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2608779Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2609011Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2609250Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2609524Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:43.2609967Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:43.2610360Z return mod(**inputs) 2025-08-14T21:54:43.2610853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward 2025-08-14T21:54:43.2611362Z outputs = self.bert( 2025-08-14T21:54:43.2611839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:54:43.2612355Z encoder_outputs = self.encoder( 2025-08-14T21:54:43.2612871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:54:43.2613385Z layer_outputs = layer_module( 2025-08-14T21:54:43.2613803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:54:43.2614248Z return super().__call__(*args, **kwargs) 2025-08-14T21:54:43.2614796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:54:43.2615325Z layer_output = apply_chunking_to_forward( 2025-08-14T21:54:43.2615821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:54:43.2616341Z return forward_fn(*input_tensors) 2025-08-14T21:54:43.2616897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:54:43.2617490Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:54:43.2618046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:54:43.2622884Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:43.2623358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:54:43.2623771Z return self.act(input) 2025-08-14T21:54:43.2623914Z 2025-08-14T21:54:43.2624010Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2624266Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2624507Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2624754Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2624998Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2625233Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2625503Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2625748Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2625984Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2626223Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2626484Z cudagraph partition due to non gpu ops 2025-08-14T21:54:43.2626761Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:54:43.2627202Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:54:43.2627607Z return mod(**inputs)
2025-08-14T21:54:43.2628111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward
2025-08-14T21:54:43.2628629Z outputs = self.bert(
2025-08-14T21:54:43.2629118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward
2025-08-14T21:54:43.2629637Z encoder_outputs = self.encoder(
2025-08-14T21:54:43.2630152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward
2025-08-14T21:54:43.2630666Z layer_outputs = layer_module(
2025-08-14T21:54:43.2631095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:54:43.2631537Z return super().__call__(*args, **kwargs)
2025-08-14T21:54:43.2632062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward
2025-08-14T21:54:43.2632592Z layer_output = apply_chunking_to_forward(
2025-08-14T21:54:43.2633210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:54:43.2633703Z return forward_fn(*input_tensors)
2025-08-14T21:54:43.2634251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk
2025-08-14T21:54:43.2634847Z intermediate_output = self.intermediate(ln_output)
2025-08-14T21:54:43.2635409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward
2025-08-14T21:54:43.2635972Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:54:43.2636459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:54:43.2636872Z return self.act(input)
2025-08-14T21:54:43.2637015Z
2025-08-14T21:54:43.2637110Z cudagraph partition due to non gpu ops
2025-08-14T21:54:43.2637355Z cudagraph partition due to non gpu ops
2025-08-14T21:54:43.2637649Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:54:43.2638088Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:54:43.2638488Z return mod(**inputs)
2025-08-14T21:54:43.2638976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1629, in forward
2025-08-14T21:54:43.2639582Z start_loss = loss_fct(start_logits, start_positions)
2025-08-14T21:54:43.2639785Z
2025-08-14T21:54:43.2639912Z cudagraph partition due to non gpu ops.
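The "cudagraph partition due to non gpu ops" lines above come from Inductor's CUDA-graph handling: when an op in the compiled graph does not run on the GPU (as in this CPU benchmark shard), the graph is split around it instead of being captured, and the "Found from :" stack shows which model code produced that op. A minimal sketch of the call pattern behind these frames is below; the tiny config values and the dummy inputs are assumptions for illustration, not the benchmark's actual setup, and whether the partition diagnostics are printed depends on the Inductor configuration in use.

import torch
from transformers import AutoConfig, MegatronBertForQuestionAnswering

# Tiny, made-up config so the sketch runs quickly on CPU.
config = AutoConfig.for_model(
    "megatron-bert", hidden_size=64, num_hidden_layers=2,
    num_attention_heads=4, intermediate_size=128,
)
mod = MegatronBertForQuestionAnswering(config).eval()

inputs = {
    "input_ids": torch.randint(0, config.vocab_size, (1, 128)),
    "start_positions": torch.tensor([3]),   # triggers loss_fct(start_logits, start_positions)
    "end_positions": torch.tensor([7]),
}

# mode="reduce-overhead" asks Inductor to use CUDA graphs where it can; around
# ops that are not on the GPU the graph is partitioned instead, which is what
# the messages in the log report.
opt_mod = torch.compile(mod, mode="reduce-overhead")
with torch.no_grad():
    out = opt_mod(**inputs)
print(out.loss)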
Found from :
2025-08-14T21:54:43.2640351Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:54:43.2640742Z return mod(**inputs)
2025-08-14T21:54:43.2641330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1630, in forward
2025-08-14T21:54:43.2641879Z end_loss = loss_fct(end_logits, end_positions)
2025-08-14T21:54:43.2642061Z
2025-08-14T21:54:52.0355041Z Compilation time (from dynamo_timed): 32.553257814
2025-08-14T21:54:52.0355411Z pass
2025-08-14T21:54:52.0355957Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:54:52.0356958Z TIMING: _recursive_pre_grad_passes:0.09626 _recursive_joint_graph_passes:1.44301 _recursive_post_grad_passes:0.18121 async_compile.wait:0.00461 code_gen:6.7163 inductor_compile:12.22977 backend_compile:25.20737 gc:0.00031 entire_frame_compile:32.55326 total_wall_time:32.55326
2025-08-14T21:54:52.0366758Z STATS: call_* op count: 724 | FakeTensorMode.__torch_dispatch__:51314 | FakeTensor.__torch_dispatch__:7334 | ProxyTorchDispatchMode.__torch_dispatch__:12549
2025-08-14T21:54:52.0367617Z Dynamo produced 1 graphs covering 724 ops with 0 graph breaks (0 unique)
2025-08-14T21:54:58.5770736Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:54:58.5771843Z from pkg_resources import resource_filename
2025-08-14T21:54:59.2853622Z
2025-08-14T21:55:00.5664053Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:55:00.5664402Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:55:00.5760358Z cpu eval MobileBertForMaskedLM
2025-08-14T21:55:01.1269649Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:55:01.4650440Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:55:01.7996611Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:55:49.9882108Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:49.9882733Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:49.9883157Z return mod(**inputs)
2025-08-14T21:55:49.9883695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:49.9884223Z outputs = self.mobilebert(
2025-08-14T21:55:49.9884739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 791, in forward
2025-08-14T21:55:49.9885286Z embedding_output = self.embeddings(
2025-08-14T21:55:49.9886071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 199, in forward
2025-08-14T21:55:49.9886578Z inputs_embeds = torch.cat(
2025-08-14T21:55:49.9886734Z
2025-08-14T21:55:49.9886838Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9889383Z cudagraph partition due to non gpu ops.
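The "Compilation time (from dynamo_timed)" line and the TIMING breakdown above report the wall-clock cost of compiling this frame (pre/joint/post-grad passes, code generation, overall Inductor compile) before the benchmark proper runs. A rough way to see the same first-call overhead on your own function is to time the first call of a torch.compile'd callable against a later call; this is a minimal sketch and does not reproduce dynamo_timed's internal breakdown.

import time
import torch

def f(x):
    return torch.nn.functional.gelu(x) * 2.0

compiled = torch.compile(f)
x = torch.randn(1024, 1024)

t0 = time.perf_counter()
compiled(x)                      # first call pays Dynamo tracing + Inductor compile
t1 = time.perf_counter()
compiled(x)                      # later calls reuse the cached compiled code
t2 = time.perf_counter()
print(f"first call (incl. compile): {t1 - t0:.3f}s, steady state: {t2 - t1:.3f}s")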
Found from :
2025-08-14T21:55:49.9889991Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:49.9890437Z return mod(**inputs)
2025-08-14T21:55:49.9891038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:49.9891670Z outputs = self.mobilebert(
2025-08-14T21:55:49.9892222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 791, in forward
2025-08-14T21:55:49.9892865Z embedding_output = self.embeddings(
2025-08-14T21:55:49.9893433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 215, in forward
2025-08-14T21:55:49.9902689Z embeddings = self.LayerNorm(embeddings)
2025-08-14T21:55:49.9903383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:49.9903960Z return input_tensor * self.weight + self.bias
2025-08-14T21:55:49.9904166Z
2025-08-14T21:55:49.9904288Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9904718Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:49.9905235Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:49.9905795Z return mod(**inputs)
2025-08-14T21:55:49.9906283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:49.9906797Z outputs = self.mobilebert(
2025-08-14T21:55:49.9907333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:49.9907950Z encoder_outputs = self.encoder(
2025-08-14T21:55:49.9910637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:49.9911142Z layer_outputs = layer_module(
2025-08-14T21:55:49.9911640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward
2025-08-14T21:55:49.9912266Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states)
2025-08-14T21:55:49.9912895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward
2025-08-14T21:55:49.9913446Z shared_attention_input = self.attention(hidden_states)
2025-08-14T21:55:49.9914009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward
2025-08-14T21:55:49.9914647Z layer_input = self.LayerNorm(layer_input)
2025-08-14T21:55:49.9915362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:49.9916009Z return input_tensor * self.weight + self.bias
2025-08-14T21:55:49.9916254Z
2025-08-14T21:55:49.9916410Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9916702Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9916995Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9917311Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9917620Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9917967Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9918267Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9918670Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9919063Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9919558Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9919989Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:49.9920605Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:49.9921190Z return mod(**inputs)
2025-08-14T21:55:49.9921742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:49.9922382Z outputs = self.mobilebert(
2025-08-14T21:55:49.9923164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:49.9923676Z encoder_outputs = self.encoder(
2025-08-14T21:55:49.9924188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:49.9924697Z layer_outputs = layer_module(
2025-08-14T21:55:49.9925194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward
2025-08-14T21:55:49.9925709Z self_attention_outputs = self.attention(
2025-08-14T21:55:49.9926231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward
2025-08-14T21:55:49.9926808Z attention_output = self.output(self_outputs[0], layer_input)
2025-08-14T21:55:49.9927413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward
2025-08-14T21:55:49.9928089Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:55:49.9928826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:49.9929364Z return input_tensor * self.weight + self.bias
2025-08-14T21:55:49.9929553Z
2025-08-14T21:55:49.9929652Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9929995Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:55:49.9930520Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:49.9931000Z return mod(**inputs)
2025-08-14T21:55:49.9931547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:49.9932233Z outputs = self.mobilebert(
2025-08-14T21:55:49.9932840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:49.9933457Z encoder_outputs = self.encoder(
2025-08-14T21:55:49.9934037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:49.9934555Z layer_outputs = layer_module(
2025-08-14T21:55:49.9935130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
2025-08-14T21:55:49.9935939Z attention_output = ffn_module(attention_output)
2025-08-14T21:55:49.9936747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
2025-08-14T21:55:49.9941716Z intermediate_output = self.intermediate(hidden_states)
2025-08-14T21:55:49.9942290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
2025-08-14T21:55:49.9942844Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:55:49.9943074Z
2025-08-14T21:55:49.9943174Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9943571Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:49.9944339Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:49.9944962Z return mod(**inputs)
2025-08-14T21:55:49.9945564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:49.9946418Z outputs = self.mobilebert(
2025-08-14T21:55:49.9947034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:49.9947654Z encoder_outputs = self.encoder(
2025-08-14T21:55:49.9948312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:49.9949285Z layer_outputs = layer_module(
2025-08-14T21:55:49.9949865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
2025-08-14T21:55:49.9950507Z attention_output = ffn_module(attention_output)
2025-08-14T21:55:49.9951293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
2025-08-14T21:55:49.9952067Z layer_outputs = self.output(intermediate_output, hidden_states)
2025-08-14T21:55:49.9952648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
2025-08-14T21:55:49.9953227Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:55:49.9953873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:49.9954443Z return input_tensor * self.weight + self.bias
2025-08-14T21:55:49.9954637Z
2025-08-14T21:55:49.9954737Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9955025Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:49.9955474Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:49.9955965Z return mod(**inputs)
2025-08-14T21:55:49.9956447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:49.9957088Z outputs = self.mobilebert(
2025-08-14T21:55:49.9957723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:49.9958349Z encoder_outputs = self.encoder(
2025-08-14T21:55:49.9958938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:49.9959655Z layer_outputs = layer_module(
2025-08-14T21:55:49.9960232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
2025-08-14T21:55:49.9960873Z attention_output = ffn_module(attention_output)
2025-08-14T21:55:49.9961591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
2025-08-14T21:55:49.9962299Z intermediate_output = self.intermediate(hidden_states)
2025-08-14T21:55:49.9962938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
2025-08-14T21:55:49.9963605Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:55:49.9963829Z
2025-08-14T21:55:49.9963987Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9964329Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:55:49.9981622Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:49.9982024Z return mod(**inputs)
2025-08-14T21:55:49.9982507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:49.9983010Z outputs = self.mobilebert(
2025-08-14T21:55:49.9983502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:49.9984014Z encoder_outputs = self.encoder(
2025-08-14T21:55:49.9984526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:49.9985082Z layer_outputs = layer_module(
2025-08-14T21:55:49.9985669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
2025-08-14T21:55:49.9986375Z attention_output = ffn_module(attention_output)
2025-08-14T21:55:49.9987018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
2025-08-14T21:55:49.9987716Z intermediate_output = self.intermediate(hidden_states)
2025-08-14T21:55:49.9988384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
2025-08-14T21:55:49.9989057Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:55:49.9989268Z
2025-08-14T21:55:49.9989375Z cudagraph partition due to non gpu ops
2025-08-14T21:55:49.9989770Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:49.9990311Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:49.9990788Z return mod(**inputs)
2025-08-14T21:55:49.9991435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:49.9992115Z outputs = self.mobilebert(
2025-08-14T21:55:49.9992826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:49.9993571Z encoder_outputs = self.encoder(
2025-08-14T21:55:49.9994312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:49.9995010Z layer_outputs = layer_module(
2025-08-14T21:55:49.9999824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
2025-08-14T21:55:50.0000368Z attention_output = ffn_module(attention_output)
2025-08-14T21:55:50.0000906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
2025-08-14T21:55:50.0001568Z layer_outputs = self.output(intermediate_output, hidden_states)
2025-08-14T21:55:50.0002153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
2025-08-14T21:55:50.0002717Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:55:50.0003292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:50.0004015Z return input_tensor * self.weight + self.bias
2025-08-14T21:55:50.0004280Z
2025-08-14T21:55:50.0004411Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0004799Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.0005469Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.0005978Z return mod(**inputs)
2025-08-14T21:55:50.0006632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.0007398Z outputs = self.mobilebert(
2025-08-14T21:55:50.0007969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.0008621Z encoder_outputs = self.encoder(
2025-08-14T21:55:50.0009117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.0009801Z layer_outputs = layer_module(
2025-08-14T21:55:50.0010396Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward
2025-08-14T21:55:50.0010965Z intermediate_output = self.intermediate(attention_output)
2025-08-14T21:55:50.0011525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
2025-08-14T21:55:50.0012083Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:55:50.0012289Z
2025-08-14T21:55:50.0012393Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0012678Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.0013119Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.0013516Z return mod(**inputs)
2025-08-14T21:55:50.0013999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.0014505Z outputs = self.mobilebert(
2025-08-14T21:55:50.0015000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.0015508Z encoder_outputs = self.encoder(
2025-08-14T21:55:50.0016004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.0016511Z layer_outputs = layer_module(
2025-08-14T21:55:50.0017039Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
2025-08-14T21:55:50.0017657Z layer_output = self.output(intermediate_output, attention_output, hidden_states)
2025-08-14T21:55:50.0018433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward
2025-08-14T21:55:50.0019328Z layer_output = self.LayerNorm(layer_output + residual_tensor_1)
2025-08-14T21:55:50.0020133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:50.0020958Z return input_tensor * self.weight + self.bias
2025-08-14T21:55:50.0021157Z
2025-08-14T21:55:50.0021295Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0021682Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:55:50.0022345Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0022958Z return mod(**inputs) 2025-08-14T21:55:50.0023591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0024284Z outputs = self.mobilebert( 2025-08-14T21:55:50.0033431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0034249Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0035026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0035686Z layer_outputs = layer_module( 2025-08-14T21:55:50.0036288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0037053Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0037781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:55:50.0038475Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:55:50.0039197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:55:50.0039766Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0040341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0040995Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0041260Z 2025-08-14T21:55:50.0041365Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0041649Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0042087Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0042493Z return mod(**inputs) 2025-08-14T21:55:50.0042973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0043483Z outputs = self.mobilebert( 2025-08-14T21:55:50.0043973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0044484Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0044982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0045496Z layer_outputs = layer_module( 2025-08-14T21:55:50.0045987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:55:50.0046604Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:55:50.0047257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:55:50.0047870Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:55:50.0048550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:55:50.0049469Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:55:50.0050110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0050865Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0051106Z 2025-08-14T21:55:50.0051220Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0051537Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0051820Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0052120Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0052452Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0052755Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0053073Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0053416Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0053758Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0054007Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0054286Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0054725Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0055193Z return mod(**inputs) 2025-08-14T21:55:50.0055675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0056337Z outputs = self.mobilebert( 2025-08-14T21:55:50.0056976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0057492Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0058123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0058781Z layer_outputs = layer_module( 2025-08-14T21:55:50.0059454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:55:50.0060139Z self_attention_outputs = self.attention( 2025-08-14T21:55:50.0060947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:55:50.0061681Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:55:50.0062403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:55:50.0063101Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0063990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0064709Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0064996Z 2025-08-14T21:55:50.0065177Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0065561Z cudagraph partition due to non gpu ops. 
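The repeated "cudagraph partition due to non gpu ops. Found from : ..." entries in this run all point at the same handful of MobileBERT call sites (the intermediate activation at modeling_mobilebert.py line 360 and the elementwise "input_tensor * self.weight + self.bias" forward at line 138): when inductor's cudagraph partitioning hits an op it treats as non-GPU, it splits the region and logs the originating Python frames, which is exactly what these entries show. The snippet below is a minimal, hypothetical sketch of that situation, not the benchmark's actual model or job configuration: MixedDeviceBlock is a made-up module whose compiled forward mixes a CPU round-trip into otherwise device-side elementwise work, run with mode="reduce-overhead" so inductor attempts CUDA graphs.

    import torch

    # Illustrative only; this is not the MobileBERT model from the traces above.
    class MixedDeviceBlock(torch.nn.Module):
        def __init__(self, dim: int = 8):
            super().__init__()
            self.weight = torch.nn.Parameter(torch.ones(dim))
            self.bias = torch.nn.Parameter(torch.zeros(dim))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Same elementwise-affine shape as the line-138 frame in the traces:
            # input_tensor * self.weight + self.bias
            y = x * self.weight + self.bias
            # A CPU round-trip inside the compiled region is a "non gpu op" that
            # a CUDA graph cannot capture, so the surrounding graph gets split.
            y = y.cpu().relu().to(x.device)
            return y * self.weight + self.bias

    if torch.cuda.is_available():
        mod = MixedDeviceBlock().cuda()
        # mode="reduce-overhead" asks inductor to use CUDA graphs where it can;
        # verbose inductor logging (e.g. TORCH_LOGS="+inductor") should surface
        # the partition-related messages for regions it cannot capture.
        compiled = torch.compile(mod, mode="reduce-overhead")
        _ = compiled(torch.randn(4, 8, device="cuda"))

Under those assumptions, each such split would be reported as a "cudagraph partition due to non gpu ops" entry with the originating frames, which is the pattern repeated throughout the log entries around this point.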
Found from : 2025-08-14T21:55:50.0066190Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0066768Z return mod(**inputs) 2025-08-14T21:55:50.0067422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0074260Z outputs = self.mobilebert( 2025-08-14T21:55:50.0074951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0075676Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0076352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0076993Z layer_outputs = layer_module( 2025-08-14T21:55:50.0077752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0078456Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0079114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0079856Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0080560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0081334Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0081602Z 2025-08-14T21:55:50.0081745Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0082108Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0082733Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0083142Z return mod(**inputs) 2025-08-14T21:55:50.0083613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0084161Z outputs = self.mobilebert( 2025-08-14T21:55:50.0084651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0085199Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0085697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0086210Z layer_outputs = layer_module( 2025-08-14T21:55:50.0086753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0087402Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0088104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0088890Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0089647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0109783Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0110613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0111570Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0111814Z 2025-08-14T21:55:50.0111923Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0112224Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0112679Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0113090Z return mod(**inputs) 2025-08-14T21:55:50.0113589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0114118Z outputs = self.mobilebert( 2025-08-14T21:55:50.0114624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0115147Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0115741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0116497Z layer_outputs = layer_module( 2025-08-14T21:55:50.0117220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0118020Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0118825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0119525Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0120282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0120959Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0121336Z 2025-08-14T21:55:50.0121470Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0121887Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0122435Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0122962Z return mod(**inputs) 2025-08-14T21:55:50.0123557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0124275Z outputs = self.mobilebert( 2025-08-14T21:55:50.0124904Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0125675Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0130422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0130978Z layer_outputs = layer_module( 2025-08-14T21:55:50.0131488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0132034Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0132575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0133146Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0133737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0134361Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0134937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0135470Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0135667Z 2025-08-14T21:55:50.0135768Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0136066Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0136509Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0136913Z return mod(**inputs) 2025-08-14T21:55:50.0137401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0137918Z outputs = self.mobilebert( 2025-08-14T21:55:50.0138408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0138927Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0139437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0139943Z layer_outputs = layer_module( 2025-08-14T21:55:50.0140598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0141148Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0141686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0142243Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0142819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0143376Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0143588Z 2025-08-14T21:55:50.0143700Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0143984Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0144432Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0144839Z return mod(**inputs) 2025-08-14T21:55:50.0145314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0145837Z outputs = self.mobilebert( 2025-08-14T21:55:50.0146330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0146847Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0147344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0147881Z layer_outputs = layer_module( 2025-08-14T21:55:50.0148385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0149315Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0149847Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0150432Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0151011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0151576Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0152150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0152696Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0152883Z 2025-08-14T21:55:50.0152996Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0153273Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0153724Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0154130Z return mod(**inputs) 2025-08-14T21:55:50.0154639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0159309Z outputs = self.mobilebert( 2025-08-14T21:55:50.0159809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0160324Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0160820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0161400Z layer_outputs = layer_module( 2025-08-14T21:55:50.0161902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:55:50.0162478Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:55:50.0163103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0163667Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0163874Z 2025-08-14T21:55:50.0163986Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0164266Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0164748Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0165158Z return mod(**inputs) 2025-08-14T21:55:50.0165644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0166158Z outputs = self.mobilebert( 2025-08-14T21:55:50.0166659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0167181Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0167687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0168191Z layer_outputs = layer_module( 2025-08-14T21:55:50.0168688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0169371Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0170061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:55:50.0170666Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:55:50.0171240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0171830Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0172014Z 2025-08-14T21:55:50.0172113Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0172400Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0172853Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0173271Z return mod(**inputs) 2025-08-14T21:55:50.0173749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0174266Z outputs = self.mobilebert( 2025-08-14T21:55:50.0174763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0175270Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0175779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0176291Z layer_outputs = layer_module( 2025-08-14T21:55:50.0176789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0177403Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0178030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:55:50.0178604Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:55:50.0179172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:55:50.0179736Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0180302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0180838Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0181023Z 2025-08-14T21:55:50.0181149Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0181427Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0181867Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0182269Z return mod(**inputs) 2025-08-14T21:55:50.0182758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0183272Z outputs = self.mobilebert( 2025-08-14T21:55:50.0183824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0192884Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0193541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0194216Z layer_outputs = layer_module( 2025-08-14T21:55:50.0194888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:55:50.0195517Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:55:50.0196133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:55:50.0196688Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:55:50.0197239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:55:50.0197786Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:55:50.0198351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0201092Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0201298Z 2025-08-14T21:55:50.0201406Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0201657Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0201909Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0202162Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0202402Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0202657Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0202900Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0203150Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0203389Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0203636Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0203917Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0204359Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0204763Z return mod(**inputs) 2025-08-14T21:55:50.0205248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0205750Z outputs = self.mobilebert( 2025-08-14T21:55:50.0206251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0206766Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0207268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0207775Z layer_outputs = layer_module( 2025-08-14T21:55:50.0208270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:55:50.0208804Z self_attention_outputs = self.attention( 2025-08-14T21:55:50.0209321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:55:50.0209913Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:55:50.0210483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:55:50.0211054Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0211647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0212174Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0212366Z 2025-08-14T21:55:50.0212463Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0212798Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0213306Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0213707Z return mod(**inputs) 2025-08-14T21:55:50.0214190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0214696Z outputs = self.mobilebert( 2025-08-14T21:55:50.0215180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0215689Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0216194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0216720Z layer_outputs = layer_module( 2025-08-14T21:55:50.0217222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0217779Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0218308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0218876Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0219426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0219975Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0220181Z 2025-08-14T21:55:50.0220280Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0220564Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0221006Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0221399Z return mod(**inputs) 2025-08-14T21:55:50.0221876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0222387Z outputs = self.mobilebert( 2025-08-14T21:55:50.0222878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0223377Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0223872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0224375Z layer_outputs = layer_module( 2025-08-14T21:55:50.0224867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0225405Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0225941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0226512Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0227100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0231972Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0232545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0233076Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0233261Z 2025-08-14T21:55:50.0233381Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0233672Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0234125Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0234525Z return mod(**inputs) 2025-08-14T21:55:50.0235004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0235518Z outputs = self.mobilebert( 2025-08-14T21:55:50.0236000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0236502Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0237001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0237510Z layer_outputs = layer_module( 2025-08-14T21:55:50.0238011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0238568Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0239101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0239727Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0240277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0240841Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0241152Z 2025-08-14T21:55:50.0241256Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0241543Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0242101Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0242506Z return mod(**inputs) 2025-08-14T21:55:50.0242979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0243496Z outputs = self.mobilebert( 2025-08-14T21:55:50.0243981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0244495Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0244996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0245496Z layer_outputs = layer_module( 2025-08-14T21:55:50.0245989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0246528Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0247059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0247627Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0248204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0249107Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0249748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0250275Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0250467Z 2025-08-14T21:55:50.0250564Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0250851Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0251314Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0251715Z return mod(**inputs) 2025-08-14T21:55:50.0252193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0252713Z outputs = self.mobilebert( 2025-08-14T21:55:50.0253199Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0253711Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0254214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0254718Z layer_outputs = layer_module( 2025-08-14T21:55:50.0255210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0255738Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0256322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0261099Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0261647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0262244Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0262454Z 2025-08-14T21:55:50.0262557Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0262842Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0263286Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0263687Z return mod(**inputs) 2025-08-14T21:55:50.0264161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0264679Z outputs = self.mobilebert( 2025-08-14T21:55:50.0265174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0265686Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0266181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0266697Z layer_outputs = layer_module( 2025-08-14T21:55:50.0267196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0267736Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0268264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0268841Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0269413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0269980Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0270556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0271208Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0271391Z 2025-08-14T21:55:50.0271494Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0271796Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0272238Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0272638Z return mod(**inputs) 2025-08-14T21:55:50.0273117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0273639Z outputs = self.mobilebert( 2025-08-14T21:55:50.0274137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0274649Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0275140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0275646Z layer_outputs = layer_module( 2025-08-14T21:55:50.0276139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:55:50.0276709Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:55:50.0277266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0277816Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0278027Z 2025-08-14T21:55:50.0278133Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0278416Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0278870Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0279265Z return mod(**inputs) 2025-08-14T21:55:50.0279738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0280281Z outputs = self.mobilebert( 2025-08-14T21:55:50.0280769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0281371Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0281862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0282361Z layer_outputs = layer_module( 2025-08-14T21:55:50.0282853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0283475Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0284088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:55:50.0284656Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:55:50.0285277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0290037Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0290222Z 2025-08-14T21:55:50.0290318Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0290601Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0291040Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0291449Z return mod(**inputs) 2025-08-14T21:55:50.0291925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0292437Z outputs = self.mobilebert( 2025-08-14T21:55:50.0292930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0293432Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0293966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0294477Z layer_outputs = layer_module( 2025-08-14T21:55:50.0294973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0295607Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0296228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:55:50.0296801Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:55:50.0297370Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:55:50.0297936Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0298503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0299036Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0299220Z 2025-08-14T21:55:50.0299325Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0299640Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0300176Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0300606Z return mod(**inputs) 2025-08-14T21:55:50.0301079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0301590Z outputs = self.mobilebert( 2025-08-14T21:55:50.0302110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0302615Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0303113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0303623Z layer_outputs = layer_module( 2025-08-14T21:55:50.0304124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:55:50.0304736Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:55:50.0305360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:55:50.0305918Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:55:50.0306473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:55:50.0306989Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:55:50.0307511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0308040Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0308223Z 2025-08-14T21:55:50.0308329Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0308575Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0308824Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0309066Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0309305Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0309552Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0309792Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0310028Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0310270Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0310520Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0311554Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0312018Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0312421Z return mod(**inputs) 2025-08-14T21:55:50.0312900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0313423Z outputs = self.mobilebert( 2025-08-14T21:55:50.0313916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0318729Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0319248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0319755Z layer_outputs = layer_module( 2025-08-14T21:55:50.0320260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:55:50.0320792Z self_attention_outputs = self.attention( 2025-08-14T21:55:50.0321428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:55:50.0322002Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:55:50.0322630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:55:50.0323205Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0323801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0324368Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0324561Z 2025-08-14T21:55:50.0324658Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0324941Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0325377Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0325788Z return mod(**inputs) 2025-08-14T21:55:50.0326274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0326772Z outputs = self.mobilebert( 2025-08-14T21:55:50.0327256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0327770Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0328277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0328830Z layer_outputs = layer_module( 2025-08-14T21:55:50.0329398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0329933Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0330459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0331015Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0331562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0332124Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0332334Z 2025-08-14T21:55:50.0332432Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0332715Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0333156Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0333556Z return mod(**inputs) 2025-08-14T21:55:50.0334054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0334562Z outputs = self.mobilebert( 2025-08-14T21:55:50.0335053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0335561Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0336095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0336609Z layer_outputs = layer_module( 2025-08-14T21:55:50.0337110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0337645Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0338174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0338745Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0339318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0339881Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0340452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0341031Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0341213Z 2025-08-14T21:55:50.0341319Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0341600Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0342066Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0342471Z return mod(**inputs) 2025-08-14T21:55:50.0342943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0352087Z outputs = self.mobilebert( 2025-08-14T21:55:50.0352757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0353442Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0354106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0354798Z layer_outputs = layer_module( 2025-08-14T21:55:50.0355326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0355858Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0356392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0356949Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0357502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0360181Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0360398Z 2025-08-14T21:55:50.0360499Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0360783Z cudagraph partition due to non gpu ops. 
2025-08-14T21:55:50.0386018Z 
2025-08-14T21:55:50.0386113Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0386392Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.0386878Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.0391518Z return mod(**inputs)
2025-08-14T21:55:50.0391994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.0392527Z outputs = self.mobilebert(
2025-08-14T21:55:50.0393013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.0393550Z encoder_outputs = self.encoder(
2025-08-14T21:55:50.0394048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.0394552Z layer_outputs = layer_module(
2025-08-14T21:55:50.0395045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward
2025-08-14T21:55:50.0395611Z intermediate_output = self.intermediate(attention_output)
2025-08-14T21:55:50.0396177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
2025-08-14T21:55:50.0396727Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:55:50.0396938Z 
2025-08-14T21:55:50.0397035Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0397318Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.0397758Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.0398153Z return mod(**inputs)
2025-08-14T21:55:50.0398628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.0399140Z outputs = self.mobilebert(
2025-08-14T21:55:50.0399623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.0400135Z encoder_outputs = self.encoder(
2025-08-14T21:55:50.0400633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.0401230Z layer_outputs = layer_module(
2025-08-14T21:55:50.0401797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
2025-08-14T21:55:50.0402420Z layer_output = self.output(intermediate_output, attention_output, hidden_states)
2025-08-14T21:55:50.0403072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward
2025-08-14T21:55:50.0403636Z layer_output = self.LayerNorm(layer_output + residual_tensor_1)
2025-08-14T21:55:50.0404207Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:50.0404754Z return input_tensor * self.weight + self.bias
2025-08-14T21:55:50.0404941Z 
2025-08-14T21:55:50.0405046Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0405323Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.0405764Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.0406166Z return mod(**inputs)
2025-08-14T21:55:50.0406638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.0407146Z outputs = self.mobilebert(
2025-08-14T21:55:50.0407642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.0408153Z encoder_outputs = self.encoder(
2025-08-14T21:55:50.0408651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.0409158Z layer_outputs = layer_module(
2025-08-14T21:55:50.0409655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
2025-08-14T21:55:50.0410294Z layer_output = self.output(intermediate_output, attention_output, hidden_states)
2025-08-14T21:55:50.0410908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward
2025-08-14T21:55:50.0411503Z layer_output = self.bottleneck(layer_output, residual_tensor_2)
2025-08-14T21:55:50.0412074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward
2025-08-14T21:55:50.0412641Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:55:50.0413214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:50.0413743Z return input_tensor * self.weight + self.bias
2025-08-14T21:55:50.0413927Z 
2025-08-14T21:55:50.0414037Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0414317Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.0414756Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.0415161Z return mod(**inputs)
2025-08-14T21:55:50.0415680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.0420431Z outputs = self.mobilebert(
2025-08-14T21:55:50.0420927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.0421439Z encoder_outputs = self.encoder(
2025-08-14T21:55:50.0421933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.0422445Z layer_outputs = layer_module(
2025-08-14T21:55:50.0422946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward
2025-08-14T21:55:50.0423570Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states)
2025-08-14T21:55:50.0424187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward
2025-08-14T21:55:50.0424741Z shared_attention_input = self.attention(hidden_states)
2025-08-14T21:55:50.0425331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward
2025-08-14T21:55:50.0425859Z layer_input = self.LayerNorm(layer_input)
2025-08-14T21:55:50.0426395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:50.0426928Z return input_tensor * self.weight + self.bias
2025-08-14T21:55:50.0427110Z 
2025-08-14T21:55:50.0427219Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0427471Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0427710Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0427961Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0428200Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0428437Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0428679Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0428927Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0429161Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0429406Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0429687Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.0430174Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.0430651Z return mod(**inputs)
2025-08-14T21:55:50.0431130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.0431667Z outputs = self.mobilebert(
2025-08-14T21:55:50.0432147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.0432684Z encoder_outputs = self.encoder(
2025-08-14T21:55:50.0433186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.0436502Z layer_outputs = layer_module(
2025-08-14T21:55:50.0437021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward
2025-08-14T21:55:50.0437560Z self_attention_outputs = self.attention(
2025-08-14T21:55:50.0438095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward
2025-08-14T21:55:50.0438668Z attention_output = self.output(self_outputs[0], layer_input)
2025-08-14T21:55:50.0439249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward
2025-08-14T21:55:50.0439827Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:55:50.0440394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:50.0440933Z return input_tensor * self.weight + self.bias
2025-08-14T21:55:50.0441262Z 
2025-08-14T21:55:50.0441369Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0441653Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:55:50.0616401Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0616482Z return mod(**inputs) 2025-08-14T21:55:50.0616846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0616957Z outputs = self.mobilebert( 2025-08-14T21:55:50.0617307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0617407Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0617760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0617858Z layer_outputs = layer_module( 2025-08-14T21:55:50.0618207Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0618328Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0618747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0618889Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0619314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0619460Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0619473Z 2025-08-14T21:55:50.0619570Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0619703Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0619950Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0620029Z return mod(**inputs) 2025-08-14T21:55:50.0620384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0620473Z outputs = self.mobilebert( 2025-08-14T21:55:50.0620852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0620944Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0621291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0621385Z layer_outputs = layer_module( 2025-08-14T21:55:50.0621755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0621876Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0622237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0622392Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0622753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0622902Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0623251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0623374Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0623386Z 2025-08-14T21:55:50.0623482Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0623612Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0623858Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0623937Z return mod(**inputs) 2025-08-14T21:55:50.0624297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0624415Z outputs = self.mobilebert( 2025-08-14T21:55:50.0624766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0624889Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0625238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0625329Z layer_outputs = layer_module( 2025-08-14T21:55:50.0625687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0625805Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0626150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0626300Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0626647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0626787Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0626802Z 2025-08-14T21:55:50.0626898Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0627024Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0627281Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0627366Z return mod(**inputs) 2025-08-14T21:55:50.0627722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0627810Z outputs = self.mobilebert( 2025-08-14T21:55:50.0628158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0628253Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0628625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0628714Z layer_outputs = layer_module( 2025-08-14T21:55:50.0629073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0629186Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0629561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0629714Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0630061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0630221Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0630570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0630693Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0630708Z 2025-08-14T21:55:50.0630803Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0630927Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0631180Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0631261Z return mod(**inputs) 2025-08-14T21:55:50.0631612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0631704Z outputs = self.mobilebert( 2025-08-14T21:55:50.0632053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0632171Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0632519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0632629Z layer_outputs = layer_module( 2025-08-14T21:55:50.0632986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0633130Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0641914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0642077Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0642556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0642724Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0642739Z 2025-08-14T21:55:50.0642847Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0642990Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0643330Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0643415Z return mod(**inputs) 2025-08-14T21:55:50.0643910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0644011Z outputs = self.mobilebert( 2025-08-14T21:55:50.0644495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0644600Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0644989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0645088Z layer_outputs = layer_module( 2025-08-14T21:55:50.0645461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0645584Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0645936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0646089Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0646457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0646620Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0646969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0647092Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0647104Z 2025-08-14T21:55:50.0647203Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0647328Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0647606Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0647709Z return mod(**inputs) 2025-08-14T21:55:50.0650430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0650522Z outputs = self.mobilebert( 2025-08-14T21:55:50.0650884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0650985Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0651335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0651478Z layer_outputs = layer_module( 2025-08-14T21:55:50.0651835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:55:50.0652034Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:55:50.0652397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0652537Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0652550Z 2025-08-14T21:55:50.0652648Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0652780Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0653030Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0653119Z return mod(**inputs) 2025-08-14T21:55:50.0653473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0653564Z outputs = self.mobilebert( 2025-08-14T21:55:50.0653920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0654011Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0654361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0654458Z layer_outputs = layer_module( 2025-08-14T21:55:50.0654807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0655015Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0655368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:55:50.0655524Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:55:50.0655908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0656030Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0656043Z 2025-08-14T21:55:50.0656145Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0656270Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0656545Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0656636Z return mod(**inputs) 2025-08-14T21:55:50.0656986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0657072Z outputs = self.mobilebert( 2025-08-14T21:55:50.0657430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0657518Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0657876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0657967Z layer_outputs = layer_module( 2025-08-14T21:55:50.0658315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0658519Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0658871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:55:50.0659029Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:55:50.0659376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:55:50.0659546Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0659903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0660036Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0660049Z 2025-08-14T21:55:50.0660148Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0660277Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0660523Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0660609Z return mod(**inputs) 2025-08-14T21:55:50.0660958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0661044Z outputs = self.mobilebert( 2025-08-14T21:55:50.0661402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0661491Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0661842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0661931Z layer_outputs = layer_module( 2025-08-14T21:55:50.0662339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:55:50.0662619Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:55:50.0662968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:55:50.0663105Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:55:50.0663460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:55:50.0663567Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:55:50.0663944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0664056Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0664068Z 2025-08-14T21:55:50.0664162Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0664284Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0664376Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0664472Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0664564Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0664656Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0664753Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0664844Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0664936Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0665033Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0665157Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0665407Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0665498Z return mod(**inputs) 2025-08-14T21:55:50.0665850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0665945Z outputs = self.mobilebert( 2025-08-14T21:55:50.0666294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0666381Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0666736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0666850Z layer_outputs = layer_module( 2025-08-14T21:55:50.0667203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:55:50.0667319Z self_attention_outputs = self.attention( 2025-08-14T21:55:50.0667691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:55:50.0667849Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:55:50.0668202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:55:50.0668356Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0668711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0668826Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0668838Z 2025-08-14T21:55:50.0668937Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0669063Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0669309Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0669399Z return mod(**inputs) 2025-08-14T21:55:50.0669748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0669835Z outputs = self.mobilebert( 2025-08-14T21:55:50.0670190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0670279Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0670635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0670729Z layer_outputs = layer_module( 2025-08-14T21:55:50.0671101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0671229Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0671579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0671724Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0672094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0672230Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0672242Z 2025-08-14T21:55:50.0672342Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0672465Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0672713Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0672804Z return mod(**inputs) 2025-08-14T21:55:50.0673158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0673255Z outputs = self.mobilebert( 2025-08-14T21:55:50.0673603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0673696Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0674049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0674134Z layer_outputs = layer_module( 2025-08-14T21:55:50.0674494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0674632Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0674982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0675144Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0675541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0675694Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0676048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0676159Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0676171Z 2025-08-14T21:55:50.0676271Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0676395Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0676699Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0676792Z return mod(**inputs) 2025-08-14T21:55:50.0681357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0681460Z outputs = self.mobilebert( 2025-08-14T21:55:50.0681816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0681908Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0682268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0682355Z layer_outputs = layer_module( 2025-08-14T21:55:50.0682704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0682829Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0684161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0684313Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0684671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0684808Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0684846Z 2025-08-14T21:55:50.0684954Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0685081Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0685342Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0685430Z return mod(**inputs) 2025-08-14T21:55:50.0685785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0685884Z outputs = self.mobilebert( 2025-08-14T21:55:50.0686240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0686337Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0686690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0686778Z layer_outputs = layer_module( 2025-08-14T21:55:50.0687139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0687255Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0687603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0687800Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0688149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0688306Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0688680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0688793Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0688807Z 2025-08-14T21:55:50.0688907Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0689032Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0689280Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0689370Z return mod(**inputs) 2025-08-14T21:55:50.0689724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0689823Z outputs = self.mobilebert( 2025-08-14T21:55:50.0690172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0690262Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0690617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0690705Z layer_outputs = layer_module( 2025-08-14T21:55:50.0691069Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0691232Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0691658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0691803Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0692174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0692312Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0692334Z 2025-08-14T21:55:50.0692429Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0692554Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0692827Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0692909Z return mod(**inputs) 2025-08-14T21:55:50.0693259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0693354Z outputs = self.mobilebert( 2025-08-14T21:55:50.0693705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0693809Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0694162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0694256Z layer_outputs = layer_module( 2025-08-14T21:55:50.0694615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0694729Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0695079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0695241Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0695591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0695770Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0696122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0696257Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0696270Z 2025-08-14T21:55:50.0696374Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0696499Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0696753Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0696834Z return mod(**inputs) 2025-08-14T21:55:50.0697187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0697283Z outputs = self.mobilebert( 2025-08-14T21:55:50.0697634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0697726Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0698081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0698168Z layer_outputs = layer_module( 2025-08-14T21:55:50.0698523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:55:50.0698672Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:55:50.0699019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0699161Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0699174Z 2025-08-14T21:55:50.0699268Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0699401Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0699649Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0699729Z return mod(**inputs) 2025-08-14T21:55:50.0700112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0700206Z outputs = self.mobilebert( 2025-08-14T21:55:50.0700560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0700677Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0701024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0701120Z layer_outputs = layer_module( 2025-08-14T21:55:50.0701471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0701669Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0702023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:55:50.0702177Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:55:50.0702534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0702647Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0702659Z 2025-08-14T21:55:50.0702758Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0702889Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0703135Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0703216Z return mod(**inputs) 2025-08-14T21:55:50.0703592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0703680Z outputs = self.mobilebert( 2025-08-14T21:55:50.0704036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0704145Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0704495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0704589Z layer_outputs = layer_module( 2025-08-14T21:55:50.0704936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0705139Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0705486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:55:50.0705686Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:55:50.0710279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:55:50.0710435Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0710789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0710907Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0710920Z 2025-08-14T21:55:50.0711016Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0711148Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0711398Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0711484Z return mod(**inputs) 2025-08-14T21:55:50.0711839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0711953Z outputs = self.mobilebert( 2025-08-14T21:55:50.0712312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0712406Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0712777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0712874Z layer_outputs = layer_module( 2025-08-14T21:55:50.0713220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:55:50.0713418Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:55:50.0713774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:55:50.0713915Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:55:50.0714270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:55:50.0714379Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:55:50.0714727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0714843Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0714855Z 2025-08-14T21:55:50.0714949Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0715051Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0715142Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0715233Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0715355Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0715449Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0715543Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0715647Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0715759Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0715855Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0715986Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0716234Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0716324Z return mod(**inputs) 2025-08-14T21:55:50.0716672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0716760Z outputs = self.mobilebert( 2025-08-14T21:55:50.0717114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0717205Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0717554Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0717651Z layer_outputs = layer_module( 2025-08-14T21:55:50.0717997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:55:50.0718109Z self_attention_outputs = self.attention( 2025-08-14T21:55:50.0718460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:55:50.0718612Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:55:50.0718967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:55:50.0719120Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0719494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0719605Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0719620Z 2025-08-14T21:55:50.0719714Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0719850Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0720179Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0720270Z return mod(**inputs) 2025-08-14T21:55:50.0720699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0720787Z outputs = self.mobilebert( 2025-08-14T21:55:50.0721234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0721329Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0721677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0721775Z layer_outputs = layer_module( 2025-08-14T21:55:50.0722120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0722242Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0722590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0722731Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0723085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0723252Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0723265Z 2025-08-14T21:55:50.0723372Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0723504Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0723757Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0723874Z return mod(**inputs) 2025-08-14T21:55:50.0724227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0724321Z outputs = self.mobilebert( 2025-08-14T21:55:50.0724684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0724777Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0725130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0725220Z layer_outputs = layer_module( 2025-08-14T21:55:50.0725568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0725692Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0726045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0726197Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0726549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0726700Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0727056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0727167Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0727179Z 2025-08-14T21:55:50.0727272Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0727433Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0727679Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0727767Z return mod(**inputs) 2025-08-14T21:55:50.0728114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0728221Z outputs = self.mobilebert( 2025-08-14T21:55:50.0728577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0728665Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0729013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0729107Z layer_outputs = layer_module( 2025-08-14T21:55:50.0729455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0729579Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0729926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0730064Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0730425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0730559Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0730572Z 2025-08-14T21:55:50.0730670Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0730794Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0731063Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0731149Z return mod(**inputs) 2025-08-14T21:55:50.0731497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0731608Z outputs = self.mobilebert( 2025-08-14T21:55:50.0731959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0732049Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0732396Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0732483Z layer_outputs = layer_module( 2025-08-14T21:55:50.0732830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0732949Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0733302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0733457Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0733807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0733961Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0734321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0734433Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0734446Z 2025-08-14T21:55:50.0734555Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0734732Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0739207Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0739298Z return mod(**inputs) 2025-08-14T21:55:50.0739681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0739769Z outputs = self.mobilebert( 2025-08-14T21:55:50.0740120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0740228Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0740583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0740671Z layer_outputs = layer_module( 2025-08-14T21:55:50.0741018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0741143Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0741493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0741631Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0741984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0742121Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0742135Z 2025-08-14T21:55:50.0742245Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0742372Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0742615Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0742709Z return mod(**inputs) 2025-08-14T21:55:50.0743061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0743181Z outputs = self.mobilebert( 2025-08-14T21:55:50.0743537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0743647Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0744008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0744097Z layer_outputs = layer_module( 2025-08-14T21:55:50.0744445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0744569Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0744917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0745078Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0745425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0745574Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0745928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0746041Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0746055Z 2025-08-14T21:55:50.0746159Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0746284Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0746533Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0746623Z return mod(**inputs) 2025-08-14T21:55:50.0746971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0747063Z outputs = self.mobilebert( 2025-08-14T21:55:50.0747456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0747547Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0747905Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0748011Z layer_outputs = layer_module( 2025-08-14T21:55:50.0748362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:55:50.0748520Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:55:50.0749246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0749399Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0749413Z 2025-08-14T21:55:50.0749577Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0749713Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0749973Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0750054Z return mod(**inputs) 2025-08-14T21:55:50.0750405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0750502Z outputs = self.mobilebert( 2025-08-14T21:55:50.0750850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0750951Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0751299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0751443Z layer_outputs = layer_module( 2025-08-14T21:55:50.0751799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0752029Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0752387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:55:50.0752537Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:55:50.0752883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0753000Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0753012Z 2025-08-14T21:55:50.0753107Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0753235Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0753483Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0753566Z return mod(**inputs) 2025-08-14T21:55:50.0753920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0754007Z outputs = self.mobilebert( 2025-08-14T21:55:50.0754355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0754452Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0754798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0754893Z layer_outputs = layer_module( 2025-08-14T21:55:50.0755240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0755437Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0755846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:55:50.0755999Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:55:50.0756369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:55:50.0756525Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0756873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0756991Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0757003Z 2025-08-14T21:55:50.0757100Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0757223Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0757479Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0757558Z return mod(**inputs) 2025-08-14T21:55:50.0757911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0757997Z outputs = self.mobilebert( 2025-08-14T21:55:50.0758345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0758442Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0758791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0758880Z layer_outputs = layer_module( 2025-08-14T21:55:50.0759266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:55:50.0759469Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:55:50.0759830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:55:50.0759989Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:55:50.0760339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:55:50.0760449Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:55:50.0760798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0760911Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0760925Z 2025-08-14T21:55:50.0761022Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0761189Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0761293Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0761387Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0761480Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0761578Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0761672Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0761776Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0761867Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0761960Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0762095Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0762341Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0762421Z return mod(**inputs) 2025-08-14T21:55:50.0762775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0762865Z outputs = self.mobilebert( 2025-08-14T21:55:50.0763241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0763348Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0763757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0763855Z layer_outputs = layer_module( 2025-08-14T21:55:50.0768406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:55:50.0768515Z self_attention_outputs = self.attention( 2025-08-14T21:55:50.0768866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:55:50.0769026Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:55:50.0769389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:55:50.0769543Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0769895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0770013Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0770026Z 2025-08-14T21:55:50.0770122Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0770261Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0770507Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0770589Z return mod(**inputs) 2025-08-14T21:55:50.0770952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0771070Z outputs = self.mobilebert( 2025-08-14T21:55:50.0771422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0771544Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0771939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0772038Z layer_outputs = layer_module( 2025-08-14T21:55:50.0772386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0772503Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0772861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0772999Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0773354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0773490Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0773505Z 2025-08-14T21:55:50.0773598Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0773729Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0773976Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0774056Z return mod(**inputs) 2025-08-14T21:55:50.0774408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0774496Z outputs = self.mobilebert( 2025-08-14T21:55:50.0774853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0774942Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0775317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0775411Z layer_outputs = layer_module( 2025-08-14T21:55:50.0775760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0775874Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0776249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0776402Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0776754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0776904Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0777251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0777367Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0777381Z 2025-08-14T21:55:50.0777476Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0777605Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0777853Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0777933Z return mod(**inputs) 2025-08-14T21:55:50.0778342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0778430Z outputs = self.mobilebert( 2025-08-14T21:55:50.0778847Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0778966Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0779318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0779430Z layer_outputs = layer_module( 2025-08-14T21:55:50.0779776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0779889Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0780243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0780379Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0780733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0780872Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0780884Z 2025-08-14T21:55:50.0780979Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0781110Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0781356Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0781437Z return mod(**inputs) 2025-08-14T21:55:50.0781789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0781877Z outputs = self.mobilebert( 2025-08-14T21:55:50.0782230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0782319Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0782666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0782762Z layer_outputs = layer_module( 2025-08-14T21:55:50.0783130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0783251Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0783601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0783754Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0784134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0784284Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0784638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0784748Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0784761Z 2025-08-14T21:55:50.0784855Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0784988Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0785235Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0785315Z return mod(**inputs) 2025-08-14T21:55:50.0785670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0785758Z outputs = self.mobilebert( 2025-08-14T21:55:50.0786114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0786203Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0786548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0786666Z layer_outputs = layer_module( 2025-08-14T21:55:50.0787015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0787163Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0787515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0787650Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0788004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0788137Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0788150Z 2025-08-14T21:55:50.0788246Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0788379Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0788627Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0788714Z return mod(**inputs) 2025-08-14T21:55:50.0789065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0789155Z outputs = self.mobilebert( 2025-08-14T21:55:50.0789508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0789597Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0789946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0790039Z layer_outputs = layer_module( 2025-08-14T21:55:50.0790386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0790510Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0790883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0791036Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0791394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0791563Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0791920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0792029Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0792041Z 2025-08-14T21:55:50.0792135Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0792269Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0792515Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0792622Z return mod(**inputs) 2025-08-14T21:55:50.0801269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0801371Z outputs = self.mobilebert( 2025-08-14T21:55:50.0801858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0801957Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0802431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0802540Z layer_outputs = layer_module( 2025-08-14T21:55:50.0803023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:55:50.0803240Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:55:50.0803728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0803911Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0803923Z 2025-08-14T21:55:50.0804029Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0804154Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0804402Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0804492Z return mod(**inputs) 2025-08-14T21:55:50.0804842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0804935Z outputs = self.mobilebert( 2025-08-14T21:55:50.0805284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0805380Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0805736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0805823Z layer_outputs = layer_module( 2025-08-14T21:55:50.0806175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0806373Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0806723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:55:50.0806883Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:55:50.0807286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0807399Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0807418Z 2025-08-14T21:55:50.0809812Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0809943Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0810196Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0810282Z return mod(**inputs) 2025-08-14T21:55:50.0810663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0810757Z outputs = self.mobilebert( 2025-08-14T21:55:50.0811103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0811197Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0811545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0811634Z layer_outputs = layer_module( 2025-08-14T21:55:50.0811994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0812194Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0812543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:55:50.0812702Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:55:50.0813052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:55:50.0813208Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0813632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0813742Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0813757Z 2025-08-14T21:55:50.0813861Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0814014Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0814272Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0814354Z return mod(**inputs) 2025-08-14T21:55:50.0814705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0814802Z outputs = self.mobilebert( 2025-08-14T21:55:50.0815149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0815240Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0815601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0815686Z layer_outputs = layer_module( 2025-08-14T21:55:50.0816043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:55:50.0816245Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:55:50.0816597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:55:50.0816743Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:55:50.0817093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:55:50.0817207Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:55:50.0817557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0817672Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0817706Z 2025-08-14T21:55:50.0817812Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0817910Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0818007Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0818107Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0818200Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0818319Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0818415Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0818510Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0818617Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0818710Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0818841Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0819100Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0819181Z return mod(**inputs) 2025-08-14T21:55:50.0819538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0819634Z outputs = self.mobilebert( 2025-08-14T21:55:50.0819982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0820077Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0820431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0820520Z layer_outputs = layer_module( 2025-08-14T21:55:50.0820878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:55:50.0821010Z self_attention_outputs = self.attention( 2025-08-14T21:55:50.0821370Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:55:50.0821525Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:55:50.0822031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:55:50.0822202Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0822552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0822672Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0822684Z 2025-08-14T21:55:50.0822777Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0822901Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0823158Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0823240Z return mod(**inputs) 2025-08-14T21:55:50.0823589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0823684Z outputs = self.mobilebert( 2025-08-14T21:55:50.0824030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0824126Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0824473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0824561Z layer_outputs = layer_module( 2025-08-14T21:55:50.0824921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0825040Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0825434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0825574Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0825926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0826071Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0826084Z 2025-08-14T21:55:50.0826201Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0826326Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0826582Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0826664Z return mod(**inputs) 2025-08-14T21:55:50.0827019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0827108Z outputs = self.mobilebert( 2025-08-14T21:55:50.0827458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0827555Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0827900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0827987Z layer_outputs = layer_module( 2025-08-14T21:55:50.0828342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0828460Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0828813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0828997Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0829346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0829506Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0829883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0829999Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0830013Z 2025-08-14T21:55:50.0830108Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0830232Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0830482Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0830562Z return mod(**inputs) 2025-08-14T21:55:50.0830915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0831005Z outputs = self.mobilebert( 2025-08-14T21:55:50.0831354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0831452Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0831800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0831886Z layer_outputs = layer_module( 2025-08-14T21:55:50.0832239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0832357Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0832711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0832849Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0833215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0833359Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0833373Z 2025-08-14T21:55:50.0833468Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0833598Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0833866Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0833951Z return mod(**inputs) 2025-08-14T21:55:50.0834308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0834396Z outputs = self.mobilebert( 2025-08-14T21:55:50.0834748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0834849Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0835203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0835302Z layer_outputs = layer_module( 2025-08-14T21:55:50.0835650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0835766Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0836160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0836323Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0840916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0841166Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0841522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0841666Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0841679Z 2025-08-14T21:55:50.0841776Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0841902Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0842156Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0842239Z return mod(**inputs) 2025-08-14T21:55:50.0842593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0842680Z outputs = self.mobilebert( 2025-08-14T21:55:50.0843027Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0843125Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0843476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0843570Z layer_outputs = layer_module( 2025-08-14T21:55:50.0843927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0844043Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0844403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0844541Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0844889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0845035Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0845047Z 2025-08-14T21:55:50.0845145Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0845298Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0845547Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0845632Z return mod(**inputs) 2025-08-14T21:55:50.0846015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0846104Z outputs = self.mobilebert( 2025-08-14T21:55:50.0846453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0846553Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0846898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0846994Z layer_outputs = layer_module( 2025-08-14T21:55:50.0847344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0847463Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0847827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0847979Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0848337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0848486Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0849163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0849344Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0849357Z 2025-08-14T21:55:50.0849455Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0849589Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0849865Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0849951Z return mod(**inputs) 2025-08-14T21:55:50.0850312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0850402Z outputs = self.mobilebert( 2025-08-14T21:55:50.0850806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0850903Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0851328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0851423Z layer_outputs = layer_module( 2025-08-14T21:55:50.0851771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:55:50.0851918Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:55:50.0852271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0852406Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0852419Z 2025-08-14T21:55:50.0852518Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0852643Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0852890Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0852977Z return mod(**inputs) 2025-08-14T21:55:50.0853332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0853423Z outputs = self.mobilebert( 2025-08-14T21:55:50.0853806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0853896Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0854250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0854371Z layer_outputs = layer_module( 2025-08-14T21:55:50.0854722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0854927Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0855278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:55:50.0855434Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:55:50.0855791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0855903Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0855916Z 2025-08-14T21:55:50.0856018Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0856143Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0856393Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0856478Z return mod(**inputs) 2025-08-14T21:55:50.0856825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0856918Z outputs = self.mobilebert( 2025-08-14T21:55:50.0857291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0857380Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0857740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0857854Z layer_outputs = layer_module( 2025-08-14T21:55:50.0858204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0858408Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0858759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:55:50.0858915Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:55:50.0859266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:55:50.0859417Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0859776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0859891Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0859904Z 2025-08-14T21:55:50.0860005Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0860132Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0860377Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0860465Z return mod(**inputs) 2025-08-14T21:55:50.0860813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0860904Z outputs = self.mobilebert( 2025-08-14T21:55:50.0861256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0861368Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0861722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0861810Z layer_outputs = layer_module( 2025-08-14T21:55:50.0862193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:55:50.0862401Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:55:50.0862757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:55:50.0862898Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:55:50.0863255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:55:50.0863363Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:55:50.0863721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0863834Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0863847Z 2025-08-14T21:55:50.0863948Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0864042Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0864138Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0864240Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0864334Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0864431Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0864532Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0864652Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0864746Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0864847Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0864974Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0865285Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0865393Z return mod(**inputs) 2025-08-14T21:55:50.0869936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0870041Z outputs = self.mobilebert( 2025-08-14T21:55:50.0870395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0870486Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0870848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0870946Z layer_outputs = layer_module( 2025-08-14T21:55:50.0871305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:55:50.0871414Z self_attention_outputs = self.attention( 2025-08-14T21:55:50.0871773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:55:50.0871935Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:55:50.0872284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:55:50.0872448Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0872807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0872919Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0872931Z 2025-08-14T21:55:50.0873035Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0873186Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0873434Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0873523Z return mod(**inputs) 2025-08-14T21:55:50.0873873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0873985Z outputs = self.mobilebert( 2025-08-14T21:55:50.0874333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0874421Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0874777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0874867Z layer_outputs = layer_module( 2025-08-14T21:55:50.0875222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0875335Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0875684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0875827Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0876175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0876309Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0876330Z 2025-08-14T21:55:50.0876425Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0876548Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0876820Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0876902Z return mod(**inputs) 2025-08-14T21:55:50.0877258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0877372Z outputs = self.mobilebert( 2025-08-14T21:55:50.0877721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0877817Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0878164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0878251Z layer_outputs = layer_module( 2025-08-14T21:55:50.0878603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0878724Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0879072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0879234Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0879617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0879797Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0880215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0880327Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0880339Z 2025-08-14T21:55:50.0880440Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0880564Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0880817Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0880901Z return mod(**inputs) 2025-08-14T21:55:50.0881378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0881475Z outputs = self.mobilebert( 2025-08-14T21:55:50.0881824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0881932Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0882287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0882375Z layer_outputs = layer_module( 2025-08-14T21:55:50.0882736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0882858Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0883207Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0883354Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0883704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0883854Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0883868Z 2025-08-14T21:55:50.0883962Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0884088Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0884342Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0884422Z return mod(**inputs) 2025-08-14T21:55:50.0884772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0884894Z outputs = self.mobilebert( 2025-08-14T21:55:50.0885247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0885364Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0885713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0885803Z layer_outputs = layer_module( 2025-08-14T21:55:50.0886155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0886271Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0886619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0886780Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0887133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0887292Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0887636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0887748Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0887768Z 2025-08-14T21:55:50.0887863Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0887988Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:55:50.0888240Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.0888326Z return mod(**inputs)
2025-08-14T21:55:50.0888674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.0888773Z outputs = self.mobilebert(
2025-08-14T21:55:50.0889145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.0889240Z encoder_outputs = self.encoder(
2025-08-14T21:55:50.0889599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.0889714Z layer_outputs = layer_module(
2025-08-14T21:55:50.0890075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
2025-08-14T21:55:50.0890192Z attention_output = ffn_module(attention_output)
2025-08-14T21:55:50.0890539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
2025-08-14T21:55:50.0890686Z intermediate_output = self.intermediate(hidden_states)
2025-08-14T21:55:50.0891036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
2025-08-14T21:55:50.0891183Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:55:50.0891195Z
2025-08-14T21:55:50.0891295Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0891421Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.0891680Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.0891760Z return mod(**inputs)
2025-08-14T21:55:50.0892109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.0892202Z outputs = self.mobilebert(
2025-08-14T21:55:50.0892550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.0892674Z encoder_outputs = self.encoder(
2025-08-14T21:55:50.0893022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.0893150Z layer_outputs = layer_module(
2025-08-14T21:55:50.0893512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
2025-08-14T21:55:50.0893628Z attention_output = ffn_module(attention_output)
2025-08-14T21:55:50.0893981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
2025-08-14T21:55:50.0894183Z layer_outputs = self.output(intermediate_output, hidden_states)
2025-08-14T21:55:50.0898772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
2025-08-14T21:55:50.0898934Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:55:50.0899284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:50.0899404Z return input_tensor * self.weight + self.bias
2025-08-14T21:55:50.0899417Z
2025-08-14T21:55:50.0899511Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0905851Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.0906165Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.0906249Z return mod(**inputs)
2025-08-14T21:55:50.0906617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.0906722Z outputs = self.mobilebert(
2025-08-14T21:55:50.0907076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.0907185Z encoder_outputs = self.encoder(
2025-08-14T21:55:50.0907626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.0907723Z layer_outputs = layer_module(
2025-08-14T21:55:50.0908092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward
2025-08-14T21:55:50.0908271Z intermediate_output = self.intermediate(attention_output)
2025-08-14T21:55:50.0908673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
2025-08-14T21:55:50.0908847Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:55:50.0908861Z
2025-08-14T21:55:50.0909067Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0909211Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.0909467Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.0909552Z return mod(**inputs)
2025-08-14T21:55:50.0909914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.0910008Z outputs = self.mobilebert(
2025-08-14T21:55:50.0910367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.0910458Z encoder_outputs = self.encoder(
2025-08-14T21:55:50.0910807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.0910909Z layer_outputs = layer_module(
2025-08-14T21:55:50.0911263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
2025-08-14T21:55:50.0911505Z layer_output = self.output(intermediate_output, attention_output, hidden_states)
2025-08-14T21:55:50.0911859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward
2025-08-14T21:55:50.0912039Z layer_output = self.LayerNorm(layer_output + residual_tensor_1)
2025-08-14T21:55:50.0912397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:50.0912515Z return input_tensor * self.weight + self.bias
2025-08-14T21:55:50.0912529Z
2025-08-14T21:55:50.0912637Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0912769Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.0913018Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.0913110Z return mod(**inputs)
2025-08-14T21:55:50.0913462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.0913554Z outputs = self.mobilebert(
2025-08-14T21:55:50.0913913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.0914005Z encoder_outputs = self.encoder(
2025-08-14T21:55:50.0914367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.0914456Z layer_outputs = layer_module(
2025-08-14T21:55:50.0914805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
2025-08-14T21:55:50.0915011Z layer_output = self.output(intermediate_output, attention_output, hidden_states)
2025-08-14T21:55:50.0915362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward
2025-08-14T21:55:50.0915518Z layer_output = self.bottleneck(layer_output, residual_tensor_2)
2025-08-14T21:55:50.0915896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward
2025-08-14T21:55:50.0916049Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:55:50.0916428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:50.0916541Z return input_tensor * self.weight + self.bias
2025-08-14T21:55:50.0916553Z
2025-08-14T21:55:50.0916651Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0916786Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.0917037Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.0917132Z return mod(**inputs)
2025-08-14T21:55:50.0917483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.0917573Z outputs = self.mobilebert(
2025-08-14T21:55:50.0917931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.0918021Z encoder_outputs = self.encoder(
2025-08-14T21:55:50.0918381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.0918476Z layer_outputs = layer_module(
2025-08-14T21:55:50.0918829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward
2025-08-14T21:55:50.0919046Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states)
2025-08-14T21:55:50.0919418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward
2025-08-14T21:55:50.0919557Z shared_attention_input = self.attention(hidden_states)
2025-08-14T21:55:50.0919920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward
2025-08-14T21:55:50.0920051Z layer_input = self.LayerNorm(layer_input)
2025-08-14T21:55:50.0920411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:50.0920522Z return input_tensor * self.weight + self.bias
2025-08-14T21:55:50.0920534Z
2025-08-14T21:55:50.0920633Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0920735Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0920828Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0920922Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0921025Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0921207Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0921311Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0921405Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0921499Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0921599Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0921728Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.0921981Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.0922072Z return mod(**inputs)
2025-08-14T21:55:50.0922425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.0922522Z outputs = self.mobilebert(
2025-08-14T21:55:50.0922882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.0922977Z encoder_outputs = self.encoder(
2025-08-14T21:55:50.0923427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.0931913Z layer_outputs = layer_module(
2025-08-14T21:55:50.0932389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward
2025-08-14T21:55:50.0932519Z self_attention_outputs = self.attention(
2025-08-14T21:55:50.0933024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward
2025-08-14T21:55:50.0933223Z attention_output = self.output(self_outputs[0], layer_input)
2025-08-14T21:55:50.0933707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward
2025-08-14T21:55:50.0933897Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:55:50.0934390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:50.0934519Z return input_tensor * self.weight + self.bias
2025-08-14T21:55:50.0934537Z
2025-08-14T21:55:50.0934648Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.0934791Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:55:50.0935040Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0935132Z return mod(**inputs) 2025-08-14T21:55:50.0935485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0935578Z outputs = self.mobilebert( 2025-08-14T21:55:50.0935944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0936055Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0936418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0936532Z layer_outputs = layer_module( 2025-08-14T21:55:50.0936884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0937010Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0937363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0937515Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0937910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0938122Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0938135Z 2025-08-14T21:55:50.0938242Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0938374Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0938628Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0938724Z return mod(**inputs) 2025-08-14T21:55:50.0939075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0939176Z outputs = self.mobilebert( 2025-08-14T21:55:50.0939527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0939621Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0940029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0940124Z layer_outputs = layer_module( 2025-08-14T21:55:50.0940499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0940632Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0940985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0941151Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0941520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0941673Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0942034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0942150Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0942162Z 2025-08-14T21:55:50.0942267Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0942396Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0942644Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0942734Z return mod(**inputs) 2025-08-14T21:55:50.0943085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0943187Z outputs = self.mobilebert( 2025-08-14T21:55:50.0943535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0943625Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0943983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0944093Z layer_outputs = layer_module( 2025-08-14T21:55:50.0944445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0944601Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0944949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0945093Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0945443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0945579Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0945592Z 2025-08-14T21:55:50.0945695Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0945822Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0946079Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0946161Z return mod(**inputs) 2025-08-14T21:55:50.0946512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0946610Z outputs = self.mobilebert( 2025-08-14T21:55:50.0946958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0947048Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0947406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0947493Z layer_outputs = layer_module( 2025-08-14T21:55:50.0947849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0947970Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0948342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0948508Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0949374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0949601Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0949953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0950070Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0950082Z 2025-08-14T21:55:50.0950187Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0950313Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0950562Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0950650Z return mod(**inputs) 2025-08-14T21:55:50.0951000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0951102Z outputs = self.mobilebert( 2025-08-14T21:55:50.0951450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0951541Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0951898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0951987Z layer_outputs = layer_module( 2025-08-14T21:55:50.0952403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0952636Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0952988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0953166Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0953516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0953654Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0953667Z 2025-08-14T21:55:50.0953772Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0953901Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0954158Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0954243Z return mod(**inputs) 2025-08-14T21:55:50.0954592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0954695Z outputs = self.mobilebert( 2025-08-14T21:55:50.0955044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0955137Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0955499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0955591Z layer_outputs = layer_module( 2025-08-14T21:55:50.0955953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0956074Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0956424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0956588Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0956965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0957127Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0957477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0957614Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0957627Z 2025-08-14T21:55:50.0957738Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0957867Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0958124Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0958208Z return mod(**inputs) 2025-08-14T21:55:50.0958563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0958659Z outputs = self.mobilebert( 2025-08-14T21:55:50.0959011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0959103Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0959463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0959555Z layer_outputs = layer_module( 2025-08-14T21:55:50.0959911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:55:50.0960061Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:55:50.0960414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0960581Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0960594Z 2025-08-14T21:55:50.0960690Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0960827Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0961179Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0961263Z return mod(**inputs) 2025-08-14T21:55:50.0961622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0961708Z outputs = self.mobilebert( 2025-08-14T21:55:50.0962061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0962162Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0962516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0962621Z layer_outputs = layer_module( 2025-08-14T21:55:50.0962977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0963181Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0963542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:55:50.0963697Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:55:50.0964060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0964175Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0964187Z 2025-08-14T21:55:50.0964286Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0964424Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0964672Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0964779Z return mod(**inputs) 2025-08-14T21:55:50.0965139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0965227Z outputs = self.mobilebert( 2025-08-14T21:55:50.0965607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0965698Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0966045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0966139Z layer_outputs = layer_module( 2025-08-14T21:55:50.0966486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.0966748Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.0973333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:55:50.0973491Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:55:50.0973851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:55:50.0974001Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0974349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0974468Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0974481Z 2025-08-14T21:55:50.0974605Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0974739Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0974986Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0975091Z return mod(**inputs) 2025-08-14T21:55:50.0975448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0975539Z outputs = self.mobilebert( 2025-08-14T21:55:50.0975895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0975985Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0976335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0976431Z layer_outputs = layer_module( 2025-08-14T21:55:50.0976785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:55:50.0976989Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:55:50.0977345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:55:50.0977478Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:55:50.0977833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:55:50.0977939Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:55:50.0978285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0978401Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0978415Z 2025-08-14T21:55:50.0978512Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0978611Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0978705Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0978818Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0978923Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0979015Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0979107Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0979207Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0979298Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0979413Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0979547Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0979792Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0979886Z return mod(**inputs) 2025-08-14T21:55:50.0980235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0980326Z outputs = self.mobilebert( 2025-08-14T21:55:50.0980685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0980776Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0981174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0981271Z layer_outputs = layer_module( 2025-08-14T21:55:50.0981692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:55:50.0981809Z self_attention_outputs = self.attention( 2025-08-14T21:55:50.0982160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:55:50.0982348Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:55:50.0982703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:55:50.0982855Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0983231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0983340Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0983353Z 2025-08-14T21:55:50.0983450Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0983583Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0983829Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0983913Z return mod(**inputs) 2025-08-14T21:55:50.0984277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0984368Z outputs = self.mobilebert( 2025-08-14T21:55:50.0984728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0984824Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0985170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0985267Z layer_outputs = layer_module( 2025-08-14T21:55:50.0985617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0985744Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0986090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0986232Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0986606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0986743Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0986757Z 2025-08-14T21:55:50.0986858Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0986987Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0987253Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0987343Z return mod(**inputs) 2025-08-14T21:55:50.0987691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0987780Z outputs = self.mobilebert( 2025-08-14T21:55:50.0988138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0988230Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0988592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0988681Z layer_outputs = layer_module( 2025-08-14T21:55:50.0989029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0989154Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0989504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.0989659Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.0990015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.0990184Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.0990541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.0990684Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.0990696Z 2025-08-14T21:55:50.0990790Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0990923Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.0991169Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0991256Z return mod(**inputs) 2025-08-14T21:55:50.0991606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0991693Z outputs = self.mobilebert( 2025-08-14T21:55:50.0992046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0992136Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.0992489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.0992587Z layer_outputs = layer_module( 2025-08-14T21:55:50.0992938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.0993063Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.0993411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.0993547Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.0993899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.0994038Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.0994051Z 2025-08-14T21:55:50.0994155Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.0994300Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.0994549Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.0994640Z return mod(**inputs) 2025-08-14T21:55:50.0995014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.0995101Z outputs = self.mobilebert( 2025-08-14T21:55:50.0995473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.0995574Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1000190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1000302Z layer_outputs = layer_module( 2025-08-14T21:55:50.1000660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.1000778Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.1001216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.1001375Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.1001740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.1001892Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.1002240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1002387Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1002400Z 2025-08-14T21:55:50.1002498Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1002635Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.1002904Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1002986Z return mod(**inputs) 2025-08-14T21:55:50.1003347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1003436Z outputs = self.mobilebert( 2025-08-14T21:55:50.1003785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1003884Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1004229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1004324Z layer_outputs = layer_module( 2025-08-14T21:55:50.1004671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.1004786Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.1005142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.1005279Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.1005635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.1005768Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.1005781Z 2025-08-14T21:55:50.1005879Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1006009Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.1006255Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1006335Z return mod(**inputs) 2025-08-14T21:55:50.1006712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1006802Z outputs = self.mobilebert( 2025-08-14T21:55:50.1007153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1007262Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1007613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1007709Z layer_outputs = layer_module( 2025-08-14T21:55:50.1008056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.1008177Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.1008527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.1008679Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.1009030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.1009180Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.1009529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1009644Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1009656Z 2025-08-14T21:55:50.1009750Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1009903Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.1010199Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1010280Z return mod(**inputs) 2025-08-14T21:55:50.1010707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1010819Z outputs = self.mobilebert( 2025-08-14T21:55:50.1011174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1011265Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1011611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1011703Z layer_outputs = layer_module( 2025-08-14T21:55:50.1012055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:55:50.1012204Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:55:50.1012559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.1012695Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.1012707Z 2025-08-14T21:55:50.1012809Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1012933Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.1013183Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1013274Z return mod(**inputs) 2025-08-14T21:55:50.1013623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1013718Z outputs = self.mobilebert( 2025-08-14T21:55:50.1014066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1014159Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1014536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1014627Z layer_outputs = layer_module( 2025-08-14T21:55:50.1014977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.1015211Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.1015561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:55:50.1015717Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:55:50.1016065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1016177Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1016189Z 2025-08-14T21:55:50.1016295Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1016422Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.1016676Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1016756Z return mod(**inputs) 2025-08-14T21:55:50.1017107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1017199Z outputs = self.mobilebert( 2025-08-14T21:55:50.1017550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1017642Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1018021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1018107Z layer_outputs = layer_module( 2025-08-14T21:55:50.1018467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.1018683Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.1019034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:55:50.1019191Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:55:50.1019540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:55:50.1019698Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.1020046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1020156Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1020170Z 2025-08-14T21:55:50.1020272Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1020397Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.1020648Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1020729Z return mod(**inputs) 2025-08-14T21:55:50.1021076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1021170Z outputs = self.mobilebert( 2025-08-14T21:55:50.1021515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1021605Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1021959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1022066Z layer_outputs = layer_module( 2025-08-14T21:55:50.1022423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:55:50.1022625Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:55:50.1022993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:55:50.1023142Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:55:50.1023488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:55:50.1023604Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:55:50.1023952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1024061Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1024074Z 2025-08-14T21:55:50.1024177Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1024270Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1024361Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1024459Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1024559Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1024694Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1024792Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1024883Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1029220Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1029321Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1029446Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.1029728Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1029811Z return mod(**inputs) 2025-08-14T21:55:50.1030163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1030281Z outputs = self.mobilebert( 2025-08-14T21:55:50.1030629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1030726Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1031077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1031166Z layer_outputs = layer_module( 2025-08-14T21:55:50.1031523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:55:50.1031633Z self_attention_outputs = self.attention( 2025-08-14T21:55:50.1031992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:55:50.1032143Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:55:50.1032493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:55:50.1032653Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.1033003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1033114Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1033133Z 2025-08-14T21:55:50.1033226Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1033351Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.1033605Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1033685Z return mod(**inputs) 2025-08-14T21:55:50.1034053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1034154Z outputs = self.mobilebert( 2025-08-14T21:55:50.1034502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1034617Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1034969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1035058Z layer_outputs = layer_module( 2025-08-14T21:55:50.1035412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.1035530Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.1035878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.1036018Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.1036368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.1036510Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.1036524Z 2025-08-14T21:55:50.1036619Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1036743Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.1037000Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1037081Z return mod(**inputs) 2025-08-14T21:55:50.1037433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1037548Z outputs = self.mobilebert( 2025-08-14T21:55:50.1037899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1038016Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1038362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1038450Z layer_outputs = layer_module( 2025-08-14T21:55:50.1038806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.1038921Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.1039332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.1039555Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.1039908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.1040065Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.1040413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1040531Z return 
2025-08-14T21:55:50.1060327Z 
2025-08-14T21:55:50.1060421Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1060546Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.1060801Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.1060885Z     return mod(**inputs)
2025-08-14T21:55:50.1061244Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.1061338Z     outputs = self.mobilebert(
2025-08-14T21:55:50.1061687Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.1061783Z     encoder_outputs = self.encoder(
2025-08-14T21:55:50.1062136Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.1062225Z     layer_outputs = layer_module(
2025-08-14T21:55:50.1062602Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward
2025-08-14T21:55:50.1062753Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:55:50.1063109Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
2025-08-14T21:55:50.1063269Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:55:50.1063282Z 
2025-08-14T21:55:50.1063381Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1063514Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.1063763Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.1063851Z     return mod(**inputs)
2025-08-14T21:55:50.1064202Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.1064291Z     outputs = self.mobilebert(
2025-08-14T21:55:50.1064649Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.1064739Z     encoder_outputs = self.encoder(
2025-08-14T21:55:50.1065088Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.1065187Z     layer_outputs = layer_module(
2025-08-14T21:55:50.1065536Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
2025-08-14T21:55:50.1065744Z     layer_output = self.output(intermediate_output, attention_output, hidden_states)
2025-08-14T21:55:50.1066094Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward
2025-08-14T21:55:50.1066269Z     layer_output = self.LayerNorm(layer_output + residual_tensor_1)
2025-08-14T21:55:50.1066626Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:50.1066757Z     return input_tensor * self.weight + self.bias
2025-08-14T21:55:50.1066770Z 
2025-08-14T21:55:50.1066872Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1066998Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.1067247Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.1067334Z     return mod(**inputs)
2025-08-14T21:55:50.1067686Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.1067773Z     outputs = self.mobilebert(
2025-08-14T21:55:50.1068185Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.1068276Z     encoder_outputs = self.encoder(
2025-08-14T21:55:50.1068705Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.1068798Z     layer_outputs = layer_module(
2025-08-14T21:55:50.1069149Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
2025-08-14T21:55:50.1069352Z     layer_output = self.output(intermediate_output, attention_output, hidden_states)
2025-08-14T21:55:50.1069703Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward
2025-08-14T21:55:50.1069863Z     layer_output = self.bottleneck(layer_output, residual_tensor_2)
2025-08-14T21:55:50.1070212Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward
2025-08-14T21:55:50.1070363Z     layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:55:50.1070738Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:50.1070851Z     return input_tensor * self.weight + self.bias
2025-08-14T21:55:50.1070864Z 
2025-08-14T21:55:50.1070965Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1071114Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.1071359Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.1071446Z     return mod(**inputs)
2025-08-14T21:55:50.1071798Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.1071886Z     outputs = self.mobilebert(
2025-08-14T21:55:50.1072245Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.1072340Z     encoder_outputs = self.encoder(
2025-08-14T21:55:50.1072698Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.1072787Z     layer_outputs = layer_module(
2025-08-14T21:55:50.1073140Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward
2025-08-14T21:55:50.1073346Z     query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states)
2025-08-14T21:55:50.1073696Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward
2025-08-14T21:55:50.1073835Z     shared_attention_input = self.attention(hidden_states)
2025-08-14T21:55:50.1074207Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward
2025-08-14T21:55:50.1074316Z     layer_input = self.LayerNorm(layer_input)
2025-08-14T21:55:50.1074678Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:50.1074811Z     return input_tensor * self.weight + self.bias
2025-08-14T21:55:50.1074823Z 
2025-08-14T21:55:50.1074917Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1075023Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1075121Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1075218Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1075310Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1075400Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1075497Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1075590Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1075680Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1075776Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1075903Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:50.1076156Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:50.1076244Z     return mod(**inputs)
2025-08-14T21:55:50.1076595Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:55:50.1076692Z     outputs = self.mobilebert(
2025-08-14T21:55:50.1077044Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:55:50.1077132Z     encoder_outputs = self.encoder(
2025-08-14T21:55:50.1077488Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:55:50.1077578Z     layer_outputs = layer_module(
2025-08-14T21:55:50.1077954Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward
2025-08-14T21:55:50.1078064Z     self_attention_outputs = self.attention(
2025-08-14T21:55:50.1078414Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward
2025-08-14T21:55:50.1078573Z     attention_output = self.output(self_outputs[0], layer_input)
2025-08-14T21:55:50.1078940Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward
2025-08-14T21:55:50.1079094Z     layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:55:50.1079448Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:55:50.1079563Z     return input_tensor * self.weight + self.bias
2025-08-14T21:55:50.1079576Z 
2025-08-14T21:55:50.1079676Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1079803Z cudagraph partition due to non gpu ops.
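Note: every call path the partitioner reports above bottoms out either in MobileBERT's intermediate activation (modeling_mobilebert.py, line 360) or in the elementwise affine at line 138, return input_tensor * self.weight + self.bias, which in the transformers source corresponds to the NoNorm-style module MobileBERT can use in place of LayerNorm. A minimal sketch of such a module, reconstructed only from the traced source lines (names here are illustrative, not the transformers implementation):

    import torch
    from torch import nn

    class NoNormSketch(nn.Module):
        """Elementwise scale-and-shift, modeled on the traced line
        `return input_tensor * self.weight + self.bias`."""

        def __init__(self, feat_size: int):
            super().__init__()
            self.weight = nn.Parameter(torch.ones(feat_size))
            self.bias = nn.Parameter(torch.zeros(feat_size))

        def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
            # Unlike LayerNorm there is no reduction here, just a pointwise op.
            return input_tensor * self.weight + self.bias

    hidden_states = torch.randn(2, 128, 512)
    print(NoNormSketch(512)(hidden_states).shape)  # torch.Size([2, 128, 512])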
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1193991Z 2025-08-14T21:55:50.1194085Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1194218Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.1194465Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1194545Z return mod(**inputs) 2025-08-14T21:55:50.1194898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1194985Z outputs = self.mobilebert( 2025-08-14T21:55:50.1195340Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1195431Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1195811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1195905Z layer_outputs = layer_module( 2025-08-14T21:55:50.1196254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.1196390Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.1196749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.1196884Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.1197236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.1197373Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.1197385Z 2025-08-14T21:55:50.1197478Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1197608Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.1197858Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1197946Z return mod(**inputs) 2025-08-14T21:55:50.1198295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1198381Z outputs = self.mobilebert( 2025-08-14T21:55:50.1198795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1198885Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1199302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1199417Z layer_outputs = layer_module( 2025-08-14T21:55:50.1199771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.1199915Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.1200266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.1200420Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.1200777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.1200925Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.1201350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1201462Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1201474Z 2025-08-14T21:55:50.1201568Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1201702Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.1201950Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1202033Z return mod(**inputs) 2025-08-14T21:55:50.1202389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1202477Z outputs = self.mobilebert( 2025-08-14T21:55:50.1202837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1202929Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1203276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1203377Z layer_outputs = layer_module( 2025-08-14T21:55:50.1203752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.1203881Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.1204229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.1204409Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.1204766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.1204902Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.1204915Z 2025-08-14T21:55:50.1205010Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1205143Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.1205388Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1205477Z return mod(**inputs) 2025-08-14T21:55:50.1205826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1205915Z outputs = self.mobilebert( 2025-08-14T21:55:50.1206270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1206363Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1206718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1206809Z layer_outputs = layer_module( 2025-08-14T21:55:50.1207157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.1207301Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.1207654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.1207824Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.1208180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.1208330Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.1208685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1208795Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1208808Z 2025-08-14T21:55:50.1208907Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1209048Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.1209294Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1209386Z return mod(**inputs) 2025-08-14T21:55:50.1209736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1209824Z outputs = self.mobilebert( 2025-08-14T21:55:50.1210180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1210269Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1210616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1210710Z layer_outputs = layer_module( 2025-08-14T21:55:50.1211056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:55:50.1211210Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:55:50.1211582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.1211717Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.1211730Z 2025-08-14T21:55:50.1211835Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1211959Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.1212229Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1212314Z return mod(**inputs) 2025-08-14T21:55:50.1212661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1212754Z outputs = self.mobilebert( 2025-08-14T21:55:50.1213135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1213240Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1217837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1217928Z layer_outputs = layer_module( 2025-08-14T21:55:50.1218282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.1218480Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.1218828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:55:50.1218982Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:55:50.1219328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1219474Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1219486Z 2025-08-14T21:55:50.1219583Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1219731Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.1219987Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1220068Z return mod(**inputs) 2025-08-14T21:55:50.1220423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1220517Z outputs = self.mobilebert( 2025-08-14T21:55:50.1220865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1220967Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1221320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1221433Z layer_outputs = layer_module( 2025-08-14T21:55:50.1221808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.1222009Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.1222368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:55:50.1222517Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:55:50.1222865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:55:50.1223019Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.1223366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1223476Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1223517Z 2025-08-14T21:55:50.1223613Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1223738Z cudagraph partition due to non gpu ops. 
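Every trace above bottoms out at the same frame: modeling_mobilebert.py line 138, the elementwise affine layer MobileBERT uses in place of LayerNorm. A minimal sketch of that layer, reconstructed from the frames in these traces (only the forward line is taken verbatim from the log; the class name NoNorm and the constructor are assumptions based on the transformers implementation):

    import torch
    from torch import nn

    class NoNorm(nn.Module):
        # Per-feature scale and shift with no reduction; MobileBERT uses this
        # in place of LayerNorm, so it lowers to plain elementwise mul/add kernels.
        def __init__(self, feat_size: int):
            super().__init__()
            self.weight = nn.Parameter(torch.ones(feat_size))
            self.bias = nn.Parameter(torch.zeros(feat_size))

        def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
            # modeling_mobilebert.py:138 in the traces above
            return input_tensor * self.weight + self.bias

When these ops run on CPU they cannot be captured into a CUDA graph, which appears to be what the partitioner is reporting as "non gpu ops" at each of these call sites.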
The same sequence of partition traces shown above (eleven traces in all, counting the repeated ffn_module pair) then repeats twice more with identical frames between 2025-08-14T21:55:50.1223989Z and 2025-08-14T21:55:50.1396838Z; in each repetition the bottleneck trace (modeling_mobilebert.py lines 496/444/410/138) is again followed by eleven "cudagraph partition due to non gpu ops" messages and every other trace by two. A further repetition begins at 2025-08-14T21:55:50.1397284Z with the same bottleneck trace, again followed by eleven "cudagraph partition due to non gpu ops" messages.
Found from : 2025-08-14T21:55:50.1408648Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1409054Z return mod(**inputs) 2025-08-14T21:55:50.1409534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1410034Z outputs = self.mobilebert( 2025-08-14T21:55:50.1410526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1411063Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1411572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1412101Z layer_outputs = layer_module( 2025-08-14T21:55:50.1412596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:55:50.1413119Z self_attention_outputs = self.attention( 2025-08-14T21:55:50.1413636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:55:50.1414211Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:55:50.1414779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:55:50.1415358Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.1415924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1422877Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1423084Z 2025-08-14T21:55:50.1423184Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1423475Z cudagraph partition due to non gpu ops. 
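Every record above is the same diagnostic: cudagraph capture was partitioned because the compiled region contains ops that do not run on the GPU, and each "Found from :" traceback points at the model code those ops come from, chiefly the NoNorm-style scale/shift at modeling_mobilebert.py line 138 (return input_tensor * self.weight + self.bias) and the FFN activation at line 360 (intermediate_act_fn). For orientation only, below is a minimal sketch of those two call sites; the class names, tensor sizes, and the ReLU activation are assumptions made for the sketch rather than the exact transformers source, and running it does not by itself reproduce the records in this log.

# Minimal sketch, assuming simplified module shapes; it only illustrates the two
# call sites named in the "Found from :" frames (modeling_mobilebert.py:138 and :360).
import torch
import torch.nn as nn

class NoNormSketch(nn.Module):
    # Elementwise scale + shift used in place of LayerNorm, as in the line-138 frame.
    def __init__(self, feat_size: int):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(feat_size))
        self.bias = nn.Parameter(torch.zeros(feat_size))

    def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
        return input_tensor * self.weight + self.bias

class IntermediateSketch(nn.Module):
    # Dense projection followed by the activation seen in the line-360 frame.
    # ReLU here is an assumption made for the sketch.
    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.dense = nn.Linear(hidden_size, intermediate_size)
        self.intermediate_act_fn = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        hidden_states = self.dense(hidden_states)
        hidden_states = self.intermediate_act_fn(hidden_states)
        return hidden_states

# Hypothetical compile call, for illustration only (it will not necessarily emit the
# records above): CUDA graph capture can only hold GPU work, so when a compiled
# region contains ops executing off the GPU, the graph is partitioned around them,
# which is what each "cudagraph partition due to non gpu ops" record reports.
model = nn.Sequential(IntermediateSketch(512, 512), NoNormSketch(512))
compiled = torch.compile(model, mode="reduce-overhead")
_ = compiled(torch.randn(8, 512))

These partition records read as informational notes rather than failures: they mark where the compiled model is split into separately captured segments, with the reported non-GPU ops executed between those segments.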
2025-08-14T21:55:50.1653701Z cudagraph partition due to non gpu ops
2025-08-14T21:55:50.1653832Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:55:50.1654082Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1654167Z return mod(**inputs) 2025-08-14T21:55:50.1654518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1654606Z outputs = self.mobilebert( 2025-08-14T21:55:50.1654964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1655057Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1655418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1655507Z layer_outputs = layer_module( 2025-08-14T21:55:50.1655855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:55:50.1656061Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:55:50.1656414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:55:50.1656548Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:55:50.1656903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:55:50.1657046Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:55:50.1657401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1657508Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1657520Z 2025-08-14T21:55:50.1657614Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1657738Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1657831Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1657929Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1658022Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1658113Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1658211Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1658305Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1658396Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1658498Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1658631Z cudagraph partition due to non gpu ops. 
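Note: nearly all of the tracebacks above bottom out at modeling_mobilebert.py line 138 ("return input_tensor * self.weight + self.bias"), which is MobileBERT's NoNorm layer, the drop-in replacement for LayerNorm when the config selects no_norm. A minimal sketch of that module, simplified rather than copied from transformers, with an illustrative feature size:

import torch
from torch import nn

# Minimal sketch of the module behind the recurring frame at line 138.
# Simplified from transformers' MobileBERT NoNorm; feat_size is illustrative.
class NoNorm(nn.Module):
    def __init__(self, feat_size: int = 512):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(feat_size))
        self.weight = nn.Parameter(torch.ones(feat_size))

    def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
        # Elementwise affine only -- no mean/variance reduction.
        return input_tensor * self.weight + self.bias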
Found from : 2025-08-14T21:55:50.1658882Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1658971Z return mod(**inputs) 2025-08-14T21:55:50.1659321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1659417Z outputs = self.mobilebert( 2025-08-14T21:55:50.1659766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1659855Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1660208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1660319Z layer_outputs = layer_module( 2025-08-14T21:55:50.1660668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:55:50.1660805Z self_attention_outputs = self.attention( 2025-08-14T21:55:50.1661153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:55:50.1661312Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:55:50.1661659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:55:50.1661808Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.1662168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1662279Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1662291Z 2025-08-14T21:55:50.1662393Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1662520Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.1662830Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1662922Z return mod(**inputs) 2025-08-14T21:55:50.1667463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1667556Z outputs = self.mobilebert( 2025-08-14T21:55:50.1667922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1668014Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1668369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1668460Z layer_outputs = layer_module( 2025-08-14T21:55:50.1668840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.1668969Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.1669320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.1669495Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.1669844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.1669982Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.1669995Z 2025-08-14T21:55:50.1670102Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1670233Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.1670484Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1670577Z return mod(**inputs) 2025-08-14T21:55:50.1670977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1671075Z outputs = self.mobilebert( 2025-08-14T21:55:50.1671424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1671513Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1671868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1671955Z layer_outputs = layer_module( 2025-08-14T21:55:50.1672312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.1672452Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.1672802Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.1672986Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.1673334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.1673484Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.1673833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1673944Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1673956Z 2025-08-14T21:55:50.1674058Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1674186Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.1682500Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1682644Z return mod(**inputs) 2025-08-14T21:55:50.1683079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1683180Z outputs = self.mobilebert( 2025-08-14T21:55:50.1683567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1683667Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1684031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1684138Z layer_outputs = layer_module( 2025-08-14T21:55:50.1684497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.1684626Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.1685072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.1685222Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.1685585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.1685753Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.1685768Z 2025-08-14T21:55:50.1685875Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1686024Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.1686285Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1686381Z return mod(**inputs) 2025-08-14T21:55:50.1686741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1686841Z outputs = self.mobilebert( 2025-08-14T21:55:50.1687208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1687307Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1687657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1687758Z layer_outputs = layer_module( 2025-08-14T21:55:50.1688110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.1688238Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.1688588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.1688776Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.1689140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.1689339Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.1689698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1689817Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1689831Z 2025-08-14T21:55:50.1689932Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1690074Z cudagraph partition due to non gpu ops. 
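Note: the repeated "cudagraph partition due to non gpu ops" diagnostics come from Inductor's cudagraph partitioning: when a compiled region contains ops whose tensors are not on a GPU, the partitioner splits the graph around them instead of capturing them into a CUDA graph, and on this CPU-only configuration every region qualifies. A minimal sketch of that kind of setup is below; it only illustrates the shape of the configuration, and whether a toy like this emits the same log lines depends on Inductor's logging settings.

import torch

# Hypothetical, minimal setup: cudagraph-oriented compilation ("reduce-overhead")
# applied to a function whose tensors live on CPU, so from the cudagraph
# partitioner's point of view every op is a "non gpu op".
def ffn_block(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    return torch.nn.functional.relu(x @ w) * 2.0 + 1.0

compiled = torch.compile(ffn_block, mode="reduce-overhead")
out = compiled(torch.randn(4, 8), torch.randn(8, 8))  # CPU tensors; nothing is cudagraph-captured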
Found from : 2025-08-14T21:55:50.1690329Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1690413Z return mod(**inputs) 2025-08-14T21:55:50.1690781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1690871Z outputs = self.mobilebert( 2025-08-14T21:55:50.1691234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1691329Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1691757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1691858Z layer_outputs = layer_module( 2025-08-14T21:55:50.1700786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.1700930Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.1701410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:55:50.1701573Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:55:50.1702090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.1702258Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.1702272Z 2025-08-14T21:55:50.1702379Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1702534Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.1702890Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1702987Z return mod(**inputs) 2025-08-14T21:55:50.1703387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1703478Z outputs = self.mobilebert( 2025-08-14T21:55:50.1703840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1703933Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1704293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1704387Z layer_outputs = layer_module( 2025-08-14T21:55:50.1704739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:55:50.1704866Z attention_output = ffn_module(attention_output) 2025-08-14T21:55:50.1705215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:55:50.1705371Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:55:50.1705728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:55:50.1705902Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.1706316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1708552Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1708566Z 2025-08-14T21:55:50.1708666Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1708803Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.1709057Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1709148Z return mod(**inputs) 2025-08-14T21:55:50.1709502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1709592Z outputs = self.mobilebert( 2025-08-14T21:55:50.1709956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1710048Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1710402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1710504Z layer_outputs = layer_module( 2025-08-14T21:55:50.1710854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:55:50.1711017Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:55:50.1711376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:55:50.1711514Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:55:50.1711527Z 2025-08-14T21:55:50.1711636Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1711765Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.1712024Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1712139Z return mod(**inputs) 2025-08-14T21:55:50.1712548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1712649Z outputs = self.mobilebert( 2025-08-14T21:55:50.1713026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1713118Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1713480Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1713570Z layer_outputs = layer_module( 2025-08-14T21:55:50.1713933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.1714140Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.1714492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:55:50.1714655Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:55:50.1715015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1715139Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1715151Z 2025-08-14T21:55:50.1715249Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1715376Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:50.1715633Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1715738Z return mod(**inputs) 2025-08-14T21:55:50.1716092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:55:50.1716195Z outputs = self.mobilebert( 2025-08-14T21:55:50.1716569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:55:50.1716667Z encoder_outputs = self.encoder( 2025-08-14T21:55:50.1717023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:55:50.1717112Z layer_outputs = layer_module( 2025-08-14T21:55:50.1717469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:55:50.1717667Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:55:50.1718029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:55:50.1718181Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:55:50.1718530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:55:50.1718693Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:55:50.1719045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:55:50.1719157Z return input_tensor * self.weight + self.bias 2025-08-14T21:55:50.1719180Z 2025-08-14T21:55:50.1719286Z cudagraph partition due to non gpu ops 2025-08-14T21:55:50.1719411Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.1719670Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1719756Z return mod(**inputs) 2025-08-14T21:55:50.1720126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 989, in forward 2025-08-14T21:55:50.1720256Z prediction_scores = self.cls(sequence_output) 2025-08-14T21:55:50.1720645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 643, in forward 2025-08-14T21:55:50.1720803Z prediction_scores = self.predictions(sequence_output) 2025-08-14T21:55:50.1721339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 632, in forward 2025-08-14T21:55:50.1721601Z hidden_states = hidden_states.matmul(torch.cat([self.decoder.weight.t(), self.dense.weight], dim=0)) 2025-08-14T21:55:50.1721615Z 2025-08-14T21:55:50.1721748Z cudagraph partition due to non gpu ops. 
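Note: the trace above ends in MobileBERT's MLM prediction head (modeling_mobilebert.py:632), where hidden states are projected onto the vocabulary by concatenating the transposed tied decoder weight with an extra dense weight. A shape-level sketch with illustrative sizes (not values read from this run):

import torch

# Shape-level sketch of the prediction-head matmul in the frame above.
# Sizes are illustrative stand-ins for the model config.
batch, seq, hidden, embed, vocab = 4, 16, 512, 128, 30522
hidden_states = torch.randn(batch, seq, hidden)
decoder_weight = torch.randn(vocab, embed)          # tied word-embedding weight
dense_weight = torch.randn(hidden - embed, vocab)   # learned completion of the projection
decoder_bias = torch.zeros(vocab)

# Concatenating the transposed tied weight with the extra dense weight yields a
# (hidden, vocab) matrix, so a single matmul maps hidden states to vocab scores.
prediction_scores = hidden_states.matmul(
    torch.cat([decoder_weight.t(), dense_weight], dim=0)
) + decoder_bias
assert prediction_scores.shape == (batch, seq, vocab)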
Found from : 2025-08-14T21:55:50.1721998Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1722086Z return mod(**inputs) 2025-08-14T21:55:50.1722441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 994, in forward 2025-08-14T21:55:50.1722676Z masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) 2025-08-14T21:55:50.1722689Z 2025-08-14T21:55:50.1722823Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:55:50.1723072Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:50.1723152Z return mod(**inputs) 2025-08-14T21:55:50.1723514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 989, in forward 2025-08-14T21:55:50.1723628Z prediction_scores = self.cls(sequence_output) 2025-08-14T21:55:50.1724010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 643, in forward 2025-08-14T21:55:50.1724145Z prediction_scores = self.predictions(sequence_output) 2025-08-14T21:55:50.1724499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 633, in forward 2025-08-14T21:55:50.1724631Z hidden_states += self.decoder.bias 2025-08-14T21:55:50.1724644Z 2025-08-14T21:56:03.4958525Z Compilation time (from dynamo_timed): 60.160960424 2025-08-14T21:56:03.4959200Z pass 2025-08-14T21:56:03.4965728Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:56:03.4967633Z TIMING: _recursive_pre_grad_passes:0.21744 _recursive_joint_graph_passes:1.91668 _recursive_post_grad_passes:0.28174 async_compile.wait:0.93498 code_gen:8.63739 inductor_compile:17.22122 backend_compile:43.53006 gc:0.00015 entire_frame_compile:60.16096 total_wall_time:60.16096 2025-08-14T21:56:03.4969760Z STATS: call_* op count: 1449 | FakeTensorMode.__torch_dispatch__:103338 | FakeTensor.__torch_dispatch__:12500 | ProxyTorchDispatchMode.__torch_dispatch__:23208 2025-08-14T21:56:03.4970934Z Dynamo produced 1 graphs covering 1449 ops with 0 graph breaks (0 unique) 2025-08-14T21:56:10.4217998Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-14T21:56:10.4219103Z from pkg_resources import resource_filename 2025-08-14T21:56:11.2095480Z 2025-08-14T21:56:12.1461668Z loading model: 0it [00:00, ?it/s] 2025-08-14T21:56:12.1462007Z loading model: 0it [00:00, ?it/s] 2025-08-14T21:56:12.1566308Z cpu eval MobileBertForQuestionAnswering 2025-08-14T21:56:12.6016089Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:56:12.9020402Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:56:13.1855851Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:57:01.2957530Z cudagraph partition due to non gpu ops. 
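Note: the WARNING lines above ("Trying to call the empty_gpu_cache for device: cpu ...") come from the benchmark harness's cache-flush helper, which only knows how to flush CUDA/XPU allocator caches and warns for any other device. A hedged sketch of that kind of helper (illustrative, not the harness's actual code):

import warnings
import torch

def empty_gpu_cache(device: str) -> None:
    # Illustrative helper: flush the caching allocator for GPU-like devices,
    # warn and do nothing for anything else (e.g. "cpu", as in the log above).
    if device not in ("cuda", "xpu"):
        warnings.warn(
            f"Trying to call the empty_gpu_cache for device: {device}, "
            "which is not in list [cuda, xpu]"
        )
        return
    getattr(torch, device).empty_cache()

empty_gpu_cache("cpu")  # produces the same style of warning as in the log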
Found from : 2025-08-14T21:57:01.2958411Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.2958889Z return mod(**inputs) 2025-08-14T21:57:01.2959680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.2960343Z outputs = self.mobilebert( 2025-08-14T21:57:01.2960923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 791, in forward 2025-08-14T21:57:01.2961544Z embedding_output = self.embeddings( 2025-08-14T21:57:01.2962077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 199, in forward 2025-08-14T21:57:01.2962651Z inputs_embeds = torch.cat( 2025-08-14T21:57:01.2962800Z 2025-08-14T21:57:01.2962944Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.2963239Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.2963690Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.2964091Z return mod(**inputs) 2025-08-14T21:57:01.2964593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.2965111Z outputs = self.mobilebert( 2025-08-14T21:57:01.2965601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 791, in forward 2025-08-14T21:57:01.2966208Z embedding_output = self.embeddings( 2025-08-14T21:57:01.2966734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 215, in forward 2025-08-14T21:57:01.2967269Z embeddings = self.LayerNorm(embeddings) 2025-08-14T21:57:01.2967848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.2968396Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.2968595Z 2025-08-14T21:57:01.2968709Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.2968997Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.2969455Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.2969870Z return mod(**inputs) 2025-08-14T21:57:01.2970359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.2970878Z outputs = self.mobilebert( 2025-08-14T21:57:01.2971376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.2971898Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.2972412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.2972924Z layer_outputs = layer_module( 2025-08-14T21:57:01.2973430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:57:01.2974064Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:57:01.2978944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:57:01.2979520Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:57:01.2980088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:57:01.2980698Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:57:01.2981226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.2981767Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.2981961Z 2025-08-14T21:57:01.2982094Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.2982357Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.2982601Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.2982856Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.2983110Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.2983346Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.2983594Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.2983840Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.2984078Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.2984339Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.2984633Z cudagraph partition due to non gpu ops. 
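Note: the outermost frame in every trace is benchmarks/dynamo/huggingface.py:532, forward_pass, which simply calls the compiled module with a dict of tensors (return mod(**inputs)). A standalone approximation of that call pattern for the question-answering model being evaluated here; the checkpoint name and wrapper code are illustrative, not taken from the harness:

import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Approximation of the harness's forward_pass: compile the module, build an
# input dict, and call mod(**inputs). "google/mobilebert-uncased" is the public
# MobileBERT checkpoint; the harness's own model construction may differ.
tok = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
mod = AutoModelForQuestionAnswering.from_pretrained("google/mobilebert-uncased").eval()
mod = torch.compile(mod)

inputs = tok(
    "What does the log show?",
    "It shows cudagraph partition diagnostics from Inductor.",
    return_tensors="pt",
)
with torch.no_grad():
    out = mod(**inputs)  # the frame seen in every traceback above
print(out.start_logits.shape, out.end_logits.shape)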
Found from : 2025-08-14T21:57:01.2985085Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.2985479Z return mod(**inputs) 2025-08-14T21:57:01.2985972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.2986491Z outputs = self.mobilebert( 2025-08-14T21:57:01.2986982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.2987498Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.2988004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.2988551Z layer_outputs = layer_module( 2025-08-14T21:57:01.2989130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:57:01.2989751Z self_attention_outputs = self.attention( 2025-08-14T21:57:01.2990288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:57:01.2990860Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:57:01.2991433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:57:01.2992014Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.2992598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.2993133Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.2993333Z 2025-08-14T21:57:01.2993435Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.2993778Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.2994228Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.2994635Z return mod(**inputs) 2025-08-14T21:57:01.2995131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.2995654Z outputs = self.mobilebert( 2025-08-14T21:57:01.2996143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.2996668Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.2997176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.2997694Z layer_outputs = layer_module( 2025-08-14T21:57:01.2998229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.2998785Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.2999326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.2999928Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3000488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3001116Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3001332Z 2025-08-14T21:57:01.3001449Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3001739Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3002184Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3002592Z return mod(**inputs) 2025-08-14T21:57:01.3003086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3012004Z outputs = self.mobilebert( 2025-08-14T21:57:01.3012663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3013354Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3014023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3014707Z layer_outputs = layer_module( 2025-08-14T21:57:01.3015275Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3015852Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3016387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3017003Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3017587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3018237Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3018854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3019392Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3019588Z 2025-08-14T21:57:01.3019844Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3020139Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3020586Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3020993Z return mod(**inputs) 2025-08-14T21:57:01.3021479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3022003Z outputs = self.mobilebert( 2025-08-14T21:57:01.3022493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3023065Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3023578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3024093Z layer_outputs = layer_module( 2025-08-14T21:57:01.3024601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3025147Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3025733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3026298Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3026868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3027474Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3027689Z 2025-08-14T21:57:01.3027799Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3028084Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3028529Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3028934Z return mod(**inputs) 2025-08-14T21:57:01.3029418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3029936Z outputs = self.mobilebert( 2025-08-14T21:57:01.3030440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3031061Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3031596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3032646Z layer_outputs = layer_module( 2025-08-14T21:57:01.3033588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3034127Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3034672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3035298Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3035881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3036481Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3037054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3037591Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3037782Z 2025-08-14T21:57:01.3037888Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3038169Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3038615Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3039017Z return mod(**inputs) 2025-08-14T21:57:01.3039503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3040027Z outputs = self.mobilebert( 2025-08-14T21:57:01.3040525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3041107Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3041628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3042160Z layer_outputs = layer_module( 2025-08-14T21:57:01.3043102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3044106Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3045087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3046156Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3053835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3054864Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3055276Z 2025-08-14T21:57:01.3055454Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3055802Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3056341Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3057051Z return mod(**inputs) 2025-08-14T21:57:01.3057964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3058930Z outputs = self.mobilebert( 2025-08-14T21:57:01.3059825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3060780Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3061750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3062339Z layer_outputs = layer_module( 2025-08-14T21:57:01.3062838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3063388Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3063930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3064514Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3065091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3065708Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3066284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3066856Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3067042Z 2025-08-14T21:57:01.3067144Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3067439Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3067944Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3068392Z return mod(**inputs) 2025-08-14T21:57:01.3068882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3069458Z outputs = self.mobilebert( 2025-08-14T21:57:01.3070002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3070510Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3071019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3071540Z layer_outputs = layer_module( 2025-08-14T21:57:01.3072098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:57:01.3072725Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:57:01.3073350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3073916Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3074127Z 2025-08-14T21:57:01.3074230Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3074521Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3074995Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3075398Z return mod(**inputs) 2025-08-14T21:57:01.3075887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3080653Z outputs = self.mobilebert( 2025-08-14T21:57:01.3081266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3081788Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3082290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3082810Z layer_outputs = layer_module( 2025-08-14T21:57:01.3083324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.3083955Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.3084603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:57:01.3085682Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:57:01.3086737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3087728Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3088070Z 2025-08-14T21:57:01.3088243Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3088746Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3089548Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3090317Z return mod(**inputs) 2025-08-14T21:57:01.3091316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3091986Z outputs = self.mobilebert( 2025-08-14T21:57:01.3092507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3093021Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3093526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3094041Z layer_outputs = layer_module( 2025-08-14T21:57:01.3094531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.3095155Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.3095830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:57:01.3096409Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:57:01.3096978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:57:01.3097554Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3098128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3098665Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3098856Z 2025-08-14T21:57:01.3098953Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3099243Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3099684Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3100185Z return mod(**inputs) 2025-08-14T21:57:01.3100805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3101461Z outputs = self.mobilebert( 2025-08-14T21:57:01.3102064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3102671Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3103243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3103761Z layer_outputs = layer_module( 2025-08-14T21:57:01.3104321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:57:01.3109311Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:57:01.3109956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:57:01.3110520Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:57:01.3111080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:57:01.3111618Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:57:01.3112149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3112688Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3112875Z 2025-08-14T21:57:01.3112976Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3113242Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3113504Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3113778Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3114192Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3114618Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3115071Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3115597Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3116028Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3116478Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3116970Z cudagraph partition due to non gpu ops. 
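Note: the other leaf frame that recurs throughout these traces is modeling_mobilebert.py:360, the activation call inside MobileBERT's feed-forward intermediate block: a linear projection followed by the configured activation. The sketch below is simplified, and the sizes and ReLU choice are assumptions, not values taken from this run:

import torch
from torch import nn

# Rough sketch of the block behind the frame
# "hidden_states = self.intermediate_act_fn(hidden_states)".
class Intermediate(nn.Module):
    def __init__(self, in_size: int = 128, intermediate_size: int = 512):
        super().__init__()
        self.dense = nn.Linear(in_size, intermediate_size)
        self.intermediate_act_fn = nn.ReLU()  # assumed activation

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        hidden_states = self.dense(hidden_states)
        hidden_states = self.intermediate_act_fn(hidden_states)
        return hidden_states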
Found from : 2025-08-14T21:57:01.3117767Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3118514Z return mod(**inputs) 2025-08-14T21:57:01.3119405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3120258Z outputs = self.mobilebert( 2025-08-14T21:57:01.3120759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3121338Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3121838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3122353Z layer_outputs = layer_module( 2025-08-14T21:57:01.3122859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:57:01.3123383Z self_attention_outputs = self.attention( 2025-08-14T21:57:01.3123910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:57:01.3124502Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:57:01.3125077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:57:01.3125717Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3126457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3127441Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3127782Z 2025-08-14T21:57:01.3127963Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3128470Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3129329Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3130068Z return mod(**inputs) 2025-08-14T21:57:01.3130958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3131907Z outputs = self.mobilebert( 2025-08-14T21:57:01.3132827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3133775Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3138664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3139181Z layer_outputs = layer_module( 2025-08-14T21:57:01.3139681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3140230Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3140763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3141327Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3141886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3142481Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3142696Z 2025-08-14T21:57:01.3142796Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3143089Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3143808Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3144434Z return mod(**inputs) 2025-08-14T21:57:01.3145327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3146316Z outputs = self.mobilebert( 2025-08-14T21:57:01.3146927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3147875Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3149104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3149628Z layer_outputs = layer_module( 2025-08-14T21:57:01.3150122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3150671Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3151214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3151794Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3152367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3152944Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3154025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3154603Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3154790Z 2025-08-14T21:57:01.3154964Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3155252Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3155701Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3156092Z return mod(**inputs) 2025-08-14T21:57:01.3157630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3158340Z outputs = self.mobilebert( 2025-08-14T21:57:01.3159279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3160227Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3161230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3162216Z layer_outputs = layer_module( 2025-08-14T21:57:01.3171438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3172488Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3173594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3174746Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3175768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3176816Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3177213Z 2025-08-14T21:57:01.3177390Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3179971Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3180411Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3180823Z return mod(**inputs) 2025-08-14T21:57:01.3181354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3181881Z outputs = self.mobilebert( 2025-08-14T21:57:01.3182755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3183318Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3183830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3184339Z layer_outputs = layer_module( 2025-08-14T21:57:01.3184838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3185387Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3185930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3186535Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3187491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3188556Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3189614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3190591Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3190919Z 2025-08-14T21:57:01.3191089Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3191608Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3192522Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3193263Z return mod(**inputs) 2025-08-14T21:57:01.3193757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3194283Z outputs = self.mobilebert( 2025-08-14T21:57:01.3194794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3195308Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3195811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3196332Z layer_outputs = layer_module( 2025-08-14T21:57:01.3196825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3197371Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3197961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3198528Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3199464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3200527Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3200761Z 2025-08-14T21:57:01.3200872Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3201240Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3201693Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3202133Z return mod(**inputs) 2025-08-14T21:57:01.3202635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3203207Z outputs = self.mobilebert( 2025-08-14T21:57:01.3203763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3204317Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3204864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3205440Z layer_outputs = layer_module( 2025-08-14T21:57:01.3205941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3210723Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3211313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3211910Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3212500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3213093Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3213669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3214491Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3214834Z 2025-08-14T21:57:01.3215016Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3215503Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3216312Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3217062Z return mod(**inputs) 2025-08-14T21:57:01.3217767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3218469Z outputs = self.mobilebert( 2025-08-14T21:57:01.3219388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3220349Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3221424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3222243Z layer_outputs = layer_module( 2025-08-14T21:57:01.3222741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:57:01.3223313Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:57:01.3223876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3224438Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3224653Z 2025-08-14T21:57:01.3224752Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3225050Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3225501Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3225929Z return mod(**inputs) 2025-08-14T21:57:01.3226421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3226937Z outputs = self.mobilebert( 2025-08-14T21:57:01.3227435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3227956Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3228824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3229775Z layer_outputs = layer_module( 2025-08-14T21:57:01.3230705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.3231949Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.3232791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:57:01.3233360Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:57:01.3233934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3234470Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3234658Z 2025-08-14T21:57:01.3234765Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3235051Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3239748Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3240198Z return mod(**inputs) 2025-08-14T21:57:01.3240691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3241299Z outputs = self.mobilebert( 2025-08-14T21:57:01.3241803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3242326Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3242829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3243340Z layer_outputs = layer_module( 2025-08-14T21:57:01.3243845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.3244552Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.3245191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:57:01.3245771Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:57:01.3246367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:57:01.3246931Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3247508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3248052Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3248237Z 2025-08-14T21:57:01.3248343Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3248628Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3249452Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3249863Z return mod(**inputs) 2025-08-14T21:57:01.3250455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3251002Z outputs = self.mobilebert( 2025-08-14T21:57:01.3251498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3252009Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3252556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3253144Z layer_outputs = layer_module( 2025-08-14T21:57:01.3253640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:57:01.3254269Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:57:01.3254934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:57:01.3255496Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:57:01.3256058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:57:01.3256597Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:57:01.3257123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3257672Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3257855Z 2025-08-14T21:57:01.3257964Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3258213Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3258465Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3258718Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3258964Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3259207Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3259459Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3259706Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3259946Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3260191Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3260475Z cudagraph partition due to non gpu ops. 
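The same handful of trace patterns and the repeated "cudagraph partition due to non gpu ops" messages recur throughout this job, which makes the raw stream hard to scan. A small helper along the following lines can condense a downloaded copy of the log into a count of partition messages and the innermost source locations they were reported from; the file name, regular expressions, and output format are assumptions for illustration, not part of this workflow.

import re
from collections import Counter

TS = re.compile(r"^\d{4}-\d{2}-\d{2}T[\d:.]+Z ")           # GitHub Actions timestamp prefix
FRAME = re.compile(r'File "([^"]+)", line (\d+), in (\w+)')  # traceback frame header

def summarize(path: str) -> None:
    partitions = 0
    innermost = Counter()
    last_frame = None
    with open(path, encoding="utf-8", errors="replace") as fh:
        for raw in fh:
            line = TS.sub("", raw).strip()
            m = FRAME.search(line)
            if m:
                # Remember the most recent frame; by the time the partition
                # message appears this is the innermost frame of the trace.
                last_frame = f"{m.group(1)}:{m.group(2)} ({m.group(3)})"
                continue
            if line.startswith("cudagraph partition due to non gpu ops"):
                partitions += 1
                if last_frame:
                    innermost[last_frame] += 1
                    last_frame = None
    print(f"total 'cudagraph partition' messages: {partitions}")
    for loc, n in innermost.most_common(10):
        print(f"{n:5d}  {loc}")

summarize("job-log.txt")  # hypothetical local copy of this log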
Found from : 2025-08-14T21:57:01.3260914Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3261322Z return mod(**inputs) 2025-08-14T21:57:01.3261814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3262325Z outputs = self.mobilebert( 2025-08-14T21:57:01.3262860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3263379Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3263925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3264433Z layer_outputs = layer_module( 2025-08-14T21:57:01.3269122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:57:01.3269675Z self_attention_outputs = self.attention( 2025-08-14T21:57:01.3270216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:57:01.3270797Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:57:01.3271379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:57:01.3271971Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3272552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3273084Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3273279Z 2025-08-14T21:57:01.3273377Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3273664Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3274105Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3274506Z return mod(**inputs) 2025-08-14T21:57:01.3275041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3292310Z outputs = self.mobilebert( 2025-08-14T21:57:01.3292935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3297945Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3298493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3299023Z layer_outputs = layer_module( 2025-08-14T21:57:01.3299542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3300088Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3300639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3301215Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3301780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3302336Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3302555Z 2025-08-14T21:57:01.3302659Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3302961Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3303417Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3303823Z return mod(**inputs) 2025-08-14T21:57:01.3304317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3304839Z outputs = self.mobilebert( 2025-08-14T21:57:01.3305337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3305851Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3306403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3306922Z layer_outputs = layer_module( 2025-08-14T21:57:01.3307417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3308057Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3308655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3309240Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3309812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3310395Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3310975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3311507Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3311700Z 2025-08-14T21:57:01.3311801Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3312098Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3312589Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3313011Z return mod(**inputs) 2025-08-14T21:57:01.3313503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3314025Z outputs = self.mobilebert( 2025-08-14T21:57:01.3314549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3315063Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3315569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3316102Z layer_outputs = layer_module( 2025-08-14T21:57:01.3316594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3317136Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3317676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3318233Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3318781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3319341Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3319552Z 2025-08-14T21:57:01.3319665Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3319949Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3320398Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3320801Z return mod(**inputs) 2025-08-14T21:57:01.3321384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3321903Z outputs = self.mobilebert( 2025-08-14T21:57:01.3322408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3331314Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3331994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3332672Z layer_outputs = layer_module( 2025-08-14T21:57:01.3333368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3334117Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3334652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3335275Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3335857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3336444Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3339218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3339764Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3339958Z 2025-08-14T21:57:01.3340060Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3340351Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3340789Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3341192Z return mod(**inputs) 2025-08-14T21:57:01.3341741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3342257Z outputs = self.mobilebert( 2025-08-14T21:57:01.3342760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3343275Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3343808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3344310Z layer_outputs = layer_module( 2025-08-14T21:57:01.3344819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3345386Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3345922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3346488Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3347049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3347608Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3347820Z 2025-08-14T21:57:01.3347925Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3348212Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3348663Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3349457Z return mod(**inputs) 2025-08-14T21:57:01.3349942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3350463Z outputs = self.mobilebert( 2025-08-14T21:57:01.3350961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3351542Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3352106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3352617Z layer_outputs = layer_module( 2025-08-14T21:57:01.3353116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3353644Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3354257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3354836Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3355454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3356026Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3356603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3357152Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3357339Z 2025-08-14T21:57:01.3357453Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3357738Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3358192Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3358605Z return mod(**inputs) 2025-08-14T21:57:01.3359088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3359605Z outputs = self.mobilebert( 2025-08-14T21:57:01.3360101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3360620Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3361204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3361725Z layer_outputs = layer_module( 2025-08-14T21:57:01.3362270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:57:01.3362839Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:57:01.3363421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3364023Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3364231Z 2025-08-14T21:57:01.3364341Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3364630Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3365084Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3365491Z return mod(**inputs) 2025-08-14T21:57:01.3370149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3370727Z outputs = self.mobilebert( 2025-08-14T21:57:01.3371229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3371755Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3372254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3372769Z layer_outputs = layer_module( 2025-08-14T21:57:01.3373274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.3373910Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.3374530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:57:01.3375120Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:57:01.3375695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3376262Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3376451Z 2025-08-14T21:57:01.3376551Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3376837Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3377279Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3377700Z return mod(**inputs) 2025-08-14T21:57:01.3378180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3378693Z outputs = self.mobilebert( 2025-08-14T21:57:01.3379181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3379686Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3380187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3380770Z layer_outputs = layer_module( 2025-08-14T21:57:01.3381321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.3381931Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.3382554Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:57:01.3383129Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:57:01.3383700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:57:01.3384288Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3384856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3385388Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3385602Z 2025-08-14T21:57:01.3385699Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3385988Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3386426Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3386838Z return mod(**inputs) 2025-08-14T21:57:01.3387318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3387831Z outputs = self.mobilebert( 2025-08-14T21:57:01.3388325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3388837Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3389341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3389853Z layer_outputs = layer_module( 2025-08-14T21:57:01.3390350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:57:01.3390966Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:57:01.3391594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:57:01.3392152Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:57:01.3392705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:57:01.3393231Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:57:01.3393788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3394320Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3394506Z 2025-08-14T21:57:01.3394610Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3394857Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3399353Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3399668Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3399918Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3400174Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3400426Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3400668Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3400925Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3401233Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3401508Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3401955Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3402363Z return mod(**inputs) 2025-08-14T21:57:01.3402856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3403365Z outputs = self.mobilebert( 2025-08-14T21:57:01.3403924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3404438Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3404948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3405455Z layer_outputs = layer_module( 2025-08-14T21:57:01.3405950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:57:01.3406507Z self_attention_outputs = self.attention( 2025-08-14T21:57:01.3407022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:57:01.3407620Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:57:01.3408191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:57:01.3408777Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3409361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3410011Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3410196Z 2025-08-14T21:57:01.3410299Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3410583Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3411026Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3411430Z return mod(**inputs) 2025-08-14T21:57:01.3411908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3412475Z outputs = self.mobilebert( 2025-08-14T21:57:01.3412967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3413477Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3413975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3414486Z layer_outputs = layer_module( 2025-08-14T21:57:01.3414979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3415513Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3416088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3416647Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3417205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3417784Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3417999Z 2025-08-14T21:57:01.3418097Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3418381Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3418821Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3419221Z return mod(**inputs) 2025-08-14T21:57:01.3419703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3420225Z outputs = self.mobilebert( 2025-08-14T21:57:01.3420707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3421220Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3421724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3422240Z layer_outputs = layer_module( 2025-08-14T21:57:01.3422726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3423264Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3423800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3428649Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3429228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3429830Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3430410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3430936Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3431129Z 2025-08-14T21:57:01.3431226Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3431511Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3431952Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3432348Z return mod(**inputs) 2025-08-14T21:57:01.3432836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3433359Z outputs = self.mobilebert( 2025-08-14T21:57:01.3433846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3434364Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3434870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3435388Z layer_outputs = layer_module( 2025-08-14T21:57:01.3435876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3436410Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3436950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3437506Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3438073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3438700Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3438934Z 2025-08-14T21:57:01.3439048Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3439352Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3439791Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3440198Z return mod(**inputs) 2025-08-14T21:57:01.3440682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3441263Z outputs = self.mobilebert( 2025-08-14T21:57:01.3441761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3442277Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3442780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3443343Z layer_outputs = layer_module( 2025-08-14T21:57:01.3443841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3444382Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3444913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3445484Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3446057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3446649Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3447213Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3447768Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3447963Z 2025-08-14T21:57:01.3448065Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3448352Z cudagraph partition due to non gpu ops. 
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
    intermediate_output = self.intermediate(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)

2025-08-14T21:57:01.3465087Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3465381Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
    layer_outputs = self.output(intermediate_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias

2025-08-14T21:57:01.3474129Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3474440Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward
    intermediate_output = self.intermediate(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)

2025-08-14T21:57:01.3480763Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3481095Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
    layer_output = self.output(intermediate_output, attention_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward
    layer_output = self.LayerNorm(layer_output + residual_tensor_1)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias

2025-08-14T21:57:01.3488853Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3489144Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
    layer_output = self.output(intermediate_output, attention_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward
    layer_output = self.bottleneck(layer_output, residual_tensor_2)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias

2025-08-14T21:57:01.3504265Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3504556Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward
    query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward
    shared_attention_input = self.attention(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward
    layer_input = self.LayerNorm(layer_input)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias

2025-08-14T21:57:01.3513296Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3513560Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3513807Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3514060Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3514297Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3514543Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3514814Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3515052Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3515300Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3515550Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3515888Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward
    self_attention_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward
    attention_output = self.output(self_outputs[0], layer_input)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias

2025-08-14T21:57:01.3524570Z cudagraph partition due to non gpu ops
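Every "Found from" trace above ends either in MobileBERT's intermediate activation (modeling_mobilebert.py line 360) or in the element-wise affine at modeling_mobilebert.py line 138, return input_tensor * self.weight + self.bias, which this model uses in place of a full LayerNorm. A minimal sketch of that pattern follows; it is an assumed reconstruction for illustration, not the transformers source.

# Minimal sketch of the affine "no-norm" layer the traces end in
# (assumption: mirrors the shape of the module at modeling_mobilebert.py:138).
import torch
from torch import nn

class NoNormSketch(nn.Module):
    """Element-wise scale and shift used instead of LayerNorm."""
    def __init__(self, feat_size: int):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(feat_size))
        self.bias = nn.Parameter(torch.zeros(feat_size))

    def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
        # This is the op every "cudagraph partition" trace above points at.
        return input_tensor * self.weight + self.bias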
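The "cudagraph partition due to non gpu ops" lines appear to come from Inductor's CUDA-graph handling: when a compiled graph contains ops that do not run on the GPU, the graph is partitioned so that only GPU-only segments remain candidates for CUDA graph capture. A rough, hypothetical way to provoke similar messages is sketched below; the class name, the .cpu() round-trip, and the TORCH_LOGS value are assumptions for illustration, the exact output depends on the PyTorch version, and a CUDA device is required for graphs to be captured at all.

import os
# Assumption: inductor debug logging surfaces partition decisions like the ones above.
os.environ.setdefault("TORCH_LOGS", "+inductor")

import torch
from torch import nn

class MixedDeviceModel(nn.Module):
    """Toy model whose forward contains a CPU round-trip (a "non gpu op")."""
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(64, 64)

    def forward(self, x):
        y = self.lin(x)
        # Leaving the GPU mid-graph forces the cudagraph logic to split around this op.
        y = y.cpu().relu().to(x.device)
        return self.lin(y)

if torch.cuda.is_available():
    model = MixedDeviceModel().cuda()
    compiled = torch.compile(model, mode="reduce-overhead")  # enables cudagraphs
    x = torch.randn(8, 64, device="cuda")
    for _ in range(3):  # a few iterations so compilation and capture actually run
        compiled(x)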
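When triaging a run like this one, the repeated reports are easier to read once they are grouped by the innermost frame they point at. A small, purely illustrative helper is sketched below; the function name and the job.log path are hypothetical.

import re
from collections import Counter

def summarize_cudagraph_partitions(log_text: str) -> Counter:
    """Count partition reports by the innermost 'File ..., line ..., in ...' frame."""
    counts = Counter()
    # Each report starts with the partition message and carries an optional traceback.
    for block in re.split(r"cudagraph partition due to non gpu ops\.?", log_text):
        frames = re.findall(r'File "([^"]+)", line (\d+), in (\w+)', block)
        if frames:
            path, lineno, func = frames[-1]  # innermost frame of the trace
            counts[f"{path}:{lineno} ({func})"] += 1
    return counts

if __name__ == "__main__":
    with open("job.log") as f:  # hypothetical path to a saved copy of this log
        for site, n in summarize_cudagraph_partitions(f.read()).most_common():
            print(f"{n:4d}  {site}")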
Found from : 2025-08-14T21:57:01.3681644Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3681726Z return mod(**inputs) 2025-08-14T21:57:01.3682089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3682179Z outputs = self.mobilebert( 2025-08-14T21:57:01.3682533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3682632Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3682980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3683067Z layer_outputs = layer_module( 2025-08-14T21:57:01.3683426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:57:01.3683532Z self_attention_outputs = self.attention( 2025-08-14T21:57:01.3683921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:57:01.3684071Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:57:01.3684417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:57:01.3684595Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3689176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3689300Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3689313Z 2025-08-14T21:57:01.3689406Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3689544Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3689824Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3689907Z return mod(**inputs) 2025-08-14T21:57:01.3690263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3690357Z outputs = self.mobilebert( 2025-08-14T21:57:01.3690709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3690804Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3691153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3691244Z layer_outputs = layer_module( 2025-08-14T21:57:01.3691598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3691714Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3692074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3692212Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3692590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3692739Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3692753Z 2025-08-14T21:57:01.3692848Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3692996Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3693251Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3693331Z return mod(**inputs) 2025-08-14T21:57:01.3693700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3693822Z outputs = self.mobilebert( 2025-08-14T21:57:01.3694173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3694275Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3694628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3694725Z layer_outputs = layer_module( 2025-08-14T21:57:01.3695081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3695197Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3695550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3695703Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3696084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3696244Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3696591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3696732Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3696745Z 2025-08-14T21:57:01.3696840Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3696973Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3697230Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3697312Z return mod(**inputs) 2025-08-14T21:57:01.3697674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3697764Z outputs = self.mobilebert( 2025-08-14T21:57:01.3698112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3698210Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3698562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3698654Z layer_outputs = layer_module( 2025-08-14T21:57:01.3699014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3699130Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3699560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3699698Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3700107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3700274Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3700288Z 2025-08-14T21:57:01.3700390Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3700521Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3700766Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3700847Z return mod(**inputs) 2025-08-14T21:57:01.3701231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3701319Z outputs = self.mobilebert( 2025-08-14T21:57:01.3710201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3710311Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3710673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3710771Z layer_outputs = layer_module( 2025-08-14T21:57:01.3711134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3711256Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3711611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3711777Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3712130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3712290Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3712729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3712848Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3712862Z 2025-08-14T21:57:01.3712998Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3713130Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:57:01.3713381Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:01.3713476Z     return mod(**inputs)
2025-08-14T21:57:01.3713837Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:57:01.3718329Z     outputs = self.mobilebert(
2025-08-14T21:57:01.3718692Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:57:01.3718789Z     encoder_outputs = self.encoder(
2025-08-14T21:57:01.3719150Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:57:01.3719243Z     layer_outputs = layer_module(
2025-08-14T21:57:01.3719597Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
2025-08-14T21:57:01.3719722Z     attention_output = ffn_module(attention_output)
2025-08-14T21:57:01.3720075Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
2025-08-14T21:57:01.3720221Z     intermediate_output = self.intermediate(hidden_states)
2025-08-14T21:57:01.3720569Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
2025-08-14T21:57:01.3720707Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:57:01.3720722Z 
2025-08-14T21:57:01.3720831Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3720963Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:57:01.3721326Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:01.3721413Z     return mod(**inputs)
2025-08-14T21:57:01.3721770Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:57:01.3721872Z     outputs = self.mobilebert(
2025-08-14T21:57:01.3722246Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:57:01.3722342Z     encoder_outputs = self.encoder(
2025-08-14T21:57:01.3722701Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:57:01.3722793Z     layer_outputs = layer_module(
2025-08-14T21:57:01.3723152Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
2025-08-14T21:57:01.3723270Z     attention_output = ffn_module(attention_output)
2025-08-14T21:57:01.3723623Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
2025-08-14T21:57:01.3723792Z     layer_outputs = self.output(intermediate_output, hidden_states)
2025-08-14T21:57:01.3724140Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
2025-08-14T21:57:01.3724299Z     layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:57:01.3724651Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:57:01.3724762Z     return input_tensor * self.weight + self.bias
2025-08-14T21:57:01.3724798Z 
2025-08-14T21:57:01.3724906Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3725034Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:57:01.3725293Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:01.3725415Z     return mod(**inputs)
2025-08-14T21:57:01.3725771Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:57:01.3725868Z     outputs = self.mobilebert(
2025-08-14T21:57:01.3726221Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:57:01.3726315Z     encoder_outputs = self.encoder(
2025-08-14T21:57:01.3726671Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:57:01.3726761Z     layer_outputs = layer_module(
2025-08-14T21:57:01.3727115Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward
2025-08-14T21:57:01.3727266Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:57:01.3727616Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
2025-08-14T21:57:01.3727758Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:57:01.3727771Z 
2025-08-14T21:57:01.3727871Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3728005Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:57:01.3728252Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:01.3728335Z     return mod(**inputs)
2025-08-14T21:57:01.3728782Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:57:01.3728884Z     outputs = self.mobilebert(
2025-08-14T21:57:01.3729306Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:57:01.3729410Z     encoder_outputs = self.encoder(
2025-08-14T21:57:01.3729761Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:57:01.3729862Z     layer_outputs = layer_module(
2025-08-14T21:57:01.3730233Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
2025-08-14T21:57:01.3730430Z     layer_output = self.output(intermediate_output, attention_output, hidden_states)
2025-08-14T21:57:01.3730793Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward
2025-08-14T21:57:01.3730946Z     layer_output = self.LayerNorm(layer_output + residual_tensor_1)
2025-08-14T21:57:01.3731303Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:57:01.3731415Z     return input_tensor * self.weight + self.bias
2025-08-14T21:57:01.3731429Z 
2025-08-14T21:57:01.3731527Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3731661Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:57:01.3731915Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:01.3731997Z     return mod(**inputs)
2025-08-14T21:57:01.3732362Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:57:01.3732451Z     outputs = self.mobilebert(
2025-08-14T21:57:01.3732810Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:57:01.3732923Z     encoder_outputs = self.encoder(
2025-08-14T21:57:01.3733275Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:57:01.3733391Z     layer_outputs = layer_module(
2025-08-14T21:57:01.3733739Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
2025-08-14T21:57:01.3733938Z     layer_output = self.output(intermediate_output, attention_output, hidden_states)
2025-08-14T21:57:01.3734293Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward
2025-08-14T21:57:01.3734448Z     layer_output = self.bottleneck(layer_output, residual_tensor_2)
2025-08-14T21:57:01.3734805Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward
2025-08-14T21:57:01.3734957Z     layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:57:01.3735367Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:57:01.3735493Z     return input_tensor * self.weight + self.bias
2025-08-14T21:57:01.3735505Z 
2025-08-14T21:57:01.3735602Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3735739Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:57:01.3735988Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:01.3736071Z     return mod(**inputs)
2025-08-14T21:57:01.3736437Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:57:01.3736527Z     outputs = self.mobilebert(
2025-08-14T21:57:01.3736876Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:57:01.3736977Z     encoder_outputs = self.encoder(
2025-08-14T21:57:01.3737352Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:57:01.3737453Z     layer_outputs = layer_module(
2025-08-14T21:57:01.3737802Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward
2025-08-14T21:57:01.3738026Z     query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states)
2025-08-14T21:57:01.3738385Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward
2025-08-14T21:57:01.3738522Z     shared_attention_input = self.attention(hidden_states)
2025-08-14T21:57:01.3738877Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward
2025-08-14T21:57:01.3738987Z     layer_input = self.LayerNorm(layer_input)
2025-08-14T21:57:01.3739337Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:57:01.3739458Z     return input_tensor * self.weight + self.bias
2025-08-14T21:57:01.3739471Z 
2025-08-14T21:57:01.3739569Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3739672Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3739769Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3739864Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3739971Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3740064Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3740156Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3740256Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3740348Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3740467Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3740604Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:57:01.3740854Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:01.3740971Z     return mod(**inputs)
2025-08-14T21:57:01.3741327Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:57:01.3741416Z     outputs = self.mobilebert(
2025-08-14T21:57:01.3741774Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:57:01.3741865Z     encoder_outputs = self.encoder(
2025-08-14T21:57:01.3742218Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:57:01.3742316Z     layer_outputs = layer_module(
2025-08-14T21:57:01.3742666Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward
2025-08-14T21:57:01.3742785Z     self_attention_outputs = self.attention(
2025-08-14T21:57:01.3747325Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward
2025-08-14T21:57:01.3747487Z     attention_output = self.output(self_outputs[0], layer_input)
2025-08-14T21:57:01.3747852Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward
2025-08-14T21:57:01.3748005Z     layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:57:01.3748365Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:57:01.3748477Z     return input_tensor * self.weight + self.bias
2025-08-14T21:57:01.3748491Z 
2025-08-14T21:57:01.3748587Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3749105Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:57:01.3749420Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3749507Z return mod(**inputs) 2025-08-14T21:57:01.3749874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3749967Z outputs = self.mobilebert( 2025-08-14T21:57:01.3750353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3750444Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3750795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3750894Z layer_outputs = layer_module( 2025-08-14T21:57:01.3751250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3751377Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3751728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3751869Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3752228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3752366Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3752378Z 2025-08-14T21:57:01.3752473Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3752615Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3752865Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3752986Z return mod(**inputs) 2025-08-14T21:57:01.3753346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3753441Z outputs = self.mobilebert( 2025-08-14T21:57:01.3753826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3753916Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3754273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3754368Z layer_outputs = layer_module( 2025-08-14T21:57:01.3754716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3754840Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3755194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3755349Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3755702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3755855Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3756211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3756321Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3756333Z 2025-08-14T21:57:01.3756429Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3756565Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3756810Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3756902Z return mod(**inputs) 2025-08-14T21:57:01.3757277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3757367Z outputs = self.mobilebert( 2025-08-14T21:57:01.3757808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3757919Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3758327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3758425Z layer_outputs = layer_module( 2025-08-14T21:57:01.3758771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3758895Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3759247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3759390Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3759748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3759885Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3759898Z 2025-08-14T21:57:01.3760005Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3760138Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3760390Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3760483Z return mod(**inputs) 2025-08-14T21:57:01.3760841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3760964Z outputs = self.mobilebert( 2025-08-14T21:57:01.3761406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3761504Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3761883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3761979Z layer_outputs = layer_module( 2025-08-14T21:57:01.3762382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3762508Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3762855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3763018Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3763369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3763521Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3763884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3764000Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3764012Z 2025-08-14T21:57:01.3764108Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3764245Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3764493Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3764583Z return mod(**inputs) 2025-08-14T21:57:01.3764940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3765035Z outputs = self.mobilebert( 2025-08-14T21:57:01.3765418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3765509Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3765865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3765953Z layer_outputs = layer_module( 2025-08-14T21:57:01.3766323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3766452Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3766801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3766939Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3767300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3767438Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3767453Z 2025-08-14T21:57:01.3767558Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3767684Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3767932Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3768025Z return mod(**inputs) 2025-08-14T21:57:01.3768386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3768475Z outputs = self.mobilebert( 2025-08-14T21:57:01.3768838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3768949Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3769301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3769399Z layer_outputs = layer_module( 2025-08-14T21:57:01.3769782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3769901Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3770251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3770411Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3770758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3770907Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3771266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3771381Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3771395Z 2025-08-14T21:57:01.3771497Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3771622Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3771870Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3780190Z return mod(**inputs) 2025-08-14T21:57:01.3780691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3780805Z outputs = self.mobilebert( 2025-08-14T21:57:01.3781281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3781381Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3781894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3781989Z layer_outputs = layer_module( 2025-08-14T21:57:01.3782478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:57:01.3782656Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:57:01.3783027Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3783172Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3783185Z 2025-08-14T21:57:01.3783280Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3783408Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3783666Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3783746Z return mod(**inputs) 2025-08-14T21:57:01.3784108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3784199Z outputs = self.mobilebert( 2025-08-14T21:57:01.3784548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3784643Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3784991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3785077Z layer_outputs = layer_module( 2025-08-14T21:57:01.3785432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.3785653Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.3786009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:57:01.3786160Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:57:01.3788737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3788855Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3788870Z 2025-08-14T21:57:01.3788966Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3789098Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3789345Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3789425Z return mod(**inputs) 2025-08-14T21:57:01.3789787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3789876Z outputs = self.mobilebert( 2025-08-14T21:57:01.3790229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3790322Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3790671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3790762Z layer_outputs = layer_module( 2025-08-14T21:57:01.3791168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.3791364Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.3791717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:57:01.3791873Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:57:01.3792254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:57:01.3792407Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3792757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3792896Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3792909Z 2025-08-14T21:57:01.3793006Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3793132Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3793384Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3793465Z return mod(**inputs) 2025-08-14T21:57:01.3793827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3793918Z outputs = self.mobilebert( 2025-08-14T21:57:01.3794268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3794368Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3794718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3794812Z layer_outputs = layer_module( 2025-08-14T21:57:01.3795164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:57:01.3795366Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:57:01.3795725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:57:01.3795881Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:57:01.3796231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:57:01.3796370Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:57:01.3796718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3796840Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3796852Z 2025-08-14T21:57:01.3796947Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3797042Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3797143Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3797234Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3797335Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3797431Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3797523Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3797622Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3797714Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3797807Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3797941Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3798189Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3798271Z return mod(**inputs) 2025-08-14T21:57:01.3798640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3798728Z outputs = self.mobilebert( 2025-08-14T21:57:01.3799085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3799175Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3799529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3799665Z layer_outputs = layer_module( 2025-08-14T21:57:01.3800018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:57:01.3800125Z self_attention_outputs = self.attention( 2025-08-14T21:57:01.3800510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:57:01.3800661Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:57:01.3801186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:57:01.3801347Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3801757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3801887Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3801900Z 2025-08-14T21:57:01.3802001Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3802143Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3802394Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3802481Z return mod(**inputs) 2025-08-14T21:57:01.3802848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3802938Z outputs = self.mobilebert( 2025-08-14T21:57:01.3803289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3803414Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3803766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3803866Z layer_outputs = layer_module( 2025-08-14T21:57:01.3804237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3804352Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3804708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3804848Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3805205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3805339Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3805354Z 2025-08-14T21:57:01.3805450Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3805583Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3805831Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3805914Z return mod(**inputs) 2025-08-14T21:57:01.3806274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3806361Z outputs = self.mobilebert( 2025-08-14T21:57:01.3806718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3806808Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3807155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3807250Z layer_outputs = layer_module( 2025-08-14T21:57:01.3807599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3807738Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3808100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3808255Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3808629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3808785Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3809135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3809251Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3809266Z 2025-08-14T21:57:01.3809361Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3809490Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3809749Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3809830Z return mod(**inputs) 2025-08-14T21:57:01.3810186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3810282Z outputs = self.mobilebert( 2025-08-14T21:57:01.3810632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3810725Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3811073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3811949Z layer_outputs = layer_module( 2025-08-14T21:57:01.3812307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3812425Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3812803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3812938Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3813287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3813431Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3813444Z 2025-08-14T21:57:01.3813542Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3813676Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3813926Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3814010Z return mod(**inputs) 2025-08-14T21:57:01.3814371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3814459Z outputs = self.mobilebert( 2025-08-14T21:57:01.3814806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3814905Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3815260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3815354Z layer_outputs = layer_module( 2025-08-14T21:57:01.3820013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3820182Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3820539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3820717Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3821065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3821222Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3821589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3821709Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3821722Z 2025-08-14T21:57:01.3821818Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3821948Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3822205Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3822289Z return mod(**inputs) 2025-08-14T21:57:01.3822653Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3822742Z outputs = self.mobilebert( 2025-08-14T21:57:01.3823088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3823184Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3823538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3823626Z layer_outputs = layer_module( 2025-08-14T21:57:01.3823984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3824099Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3824475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3824615Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3824984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3825125Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3825137Z 2025-08-14T21:57:01.3825234Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3825368Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3825614Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3825695Z return mod(**inputs) 2025-08-14T21:57:01.3826057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3826148Z outputs = self.mobilebert( 2025-08-14T21:57:01.3826497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3826598Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3826945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3827040Z layer_outputs = layer_module( 2025-08-14T21:57:01.3827392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3827513Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3827874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3828025Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3828391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3828582Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3828933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3829052Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3829064Z 2025-08-14T21:57:01.3829180Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3829308Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3829565Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3829645Z return mod(**inputs) 2025-08-14T21:57:01.3830077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3830169Z outputs = self.mobilebert( 2025-08-14T21:57:01.3830578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3830681Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3831031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3831126Z layer_outputs = layer_module( 2025-08-14T21:57:01.3831476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:57:01.3831624Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:57:01.3831979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3832147Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3832160Z 2025-08-14T21:57:01.3832268Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3832397Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3832642Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3832764Z return mod(**inputs) 2025-08-14T21:57:01.3833126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3833220Z outputs = self.mobilebert( 2025-08-14T21:57:01.3833583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3833674Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3834035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3834125Z layer_outputs = layer_module( 2025-08-14T21:57:01.3834476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.3834683Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.3835033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:57:01.3835186Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:57:01.3835540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3835653Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3835665Z 2025-08-14T21:57:01.3835767Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3835890Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3836139Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3836226Z return mod(**inputs) 2025-08-14T21:57:01.3836612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3836706Z outputs = self.mobilebert( 2025-08-14T21:57:01.3837054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3837163Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3837518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3837604Z layer_outputs = layer_module( 2025-08-14T21:57:01.3837950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.3838153Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.3838502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:57:01.3838661Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:57:01.3839011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:57:01.3839162Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3839516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3839625Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3839637Z 2025-08-14T21:57:01.3839738Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3839887Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3840132Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3840218Z return mod(**inputs) 2025-08-14T21:57:01.3840572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3840679Z outputs = self.mobilebert( 2025-08-14T21:57:01.3841101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3841226Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3841580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3841667Z layer_outputs = layer_module( 2025-08-14T21:57:01.3842013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:57:01.3842220Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:57:01.3842572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:57:01.3842713Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:57:01.3843060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:57:01.3843170Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:57:01.3843525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3843635Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3843647Z 2025-08-14T21:57:01.3843752Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3843849Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3843941Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3844039Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3844154Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3844246Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3844347Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3848646Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3849060Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3849161Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3849338Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3849597Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3849680Z return mod(**inputs) 2025-08-14T21:57:01.3850037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3850133Z outputs = self.mobilebert( 2025-08-14T21:57:01.3850485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3850575Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3850933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3851020Z layer_outputs = layer_module( 2025-08-14T21:57:01.3851376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:57:01.3851487Z self_attention_outputs = self.attention( 2025-08-14T21:57:01.3851834Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:57:01.3851992Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:57:01.3852371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:57:01.3852535Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3852953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3853135Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3853153Z 2025-08-14T21:57:01.3853294Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3853422Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.3853669Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3853756Z return mod(**inputs) 2025-08-14T21:57:01.3854113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3854210Z outputs = self.mobilebert( 2025-08-14T21:57:01.3854559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3854647Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3855004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3855091Z layer_outputs = layer_module( 2025-08-14T21:57:01.3855439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3855562Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3855912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3856054Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3856403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3856581Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3856596Z 2025-08-14T21:57:01.3856706Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3856831Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3857086Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3857188Z return mod(**inputs) 2025-08-14T21:57:01.3857546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3857641Z outputs = self.mobilebert( 2025-08-14T21:57:01.3857990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3858081Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3858433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3858520Z layer_outputs = layer_module( 2025-08-14T21:57:01.3858876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3859064Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3859451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.3859621Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.3859968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.3860126Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.3860503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.3860614Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.3860651Z 2025-08-14T21:57:01.3860759Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3860883Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.3861142Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.3861229Z return mod(**inputs) 2025-08-14T21:57:01.3861584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.3861678Z outputs = self.mobilebert( 2025-08-14T21:57:01.3862079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.3862170Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.3862521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.3862609Z layer_outputs = layer_module( 2025-08-14T21:57:01.3862962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.3863075Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.3863421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.3863564Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.3863912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.3864051Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.3864066Z 2025-08-14T21:57:01.3864163Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.3864287Z cudagraph partition due to non gpu ops. 
2025-08-14T21:57:01.3879730Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:57:01.3880011Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:01.3880091Z     return mod(**inputs)
2025-08-14T21:57:01.3880449Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:57:01.3880569Z     outputs = self.mobilebert(
2025-08-14T21:57:01.3880916Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:57:01.3881012Z     encoder_outputs = self.encoder(
2025-08-14T21:57:01.3881434Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:57:01.3881524Z     layer_outputs = layer_module(
2025-08-14T21:57:01.3881879Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward
2025-08-14T21:57:01.3882029Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:57:01.3882377Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
2025-08-14T21:57:01.3882526Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:57:01.3882540Z
2025-08-14T21:57:01.3882636Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3882769Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:57:01.3883018Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:01.3883098Z     return mod(**inputs)
2025-08-14T21:57:01.3883460Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:57:01.3883548Z     outputs = self.mobilebert(
2025-08-14T21:57:01.3883912Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:57:01.3884010Z     encoder_outputs = self.encoder(
2025-08-14T21:57:01.3884382Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:57:01.3884482Z     layer_outputs = layer_module(
2025-08-14T21:57:01.3884832Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
2025-08-14T21:57:01.3885052Z     layer_output = self.output(intermediate_output, attention_output, hidden_states)
2025-08-14T21:57:01.3885411Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward
2025-08-14T21:57:01.3885563Z     layer_output = self.LayerNorm(layer_output + residual_tensor_1)
2025-08-14T21:57:01.3885919Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:57:01.3886031Z     return input_tensor * self.weight + self.bias
2025-08-14T21:57:01.3886043Z
2025-08-14T21:57:01.3886142Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3886276Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:57:01.3886527Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:01.3886614Z     return mod(**inputs)
2025-08-14T21:57:01.3886974Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:57:01.3887061Z     outputs = self.mobilebert(
2025-08-14T21:57:01.3887415Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:57:01.3887502Z     encoder_outputs = self.encoder(
2025-08-14T21:57:01.3887850Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:57:01.3888039Z     layer_outputs = layer_module(
2025-08-14T21:57:01.3888409Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
2025-08-14T21:57:01.3888658Z     layer_output = self.output(intermediate_output, attention_output, hidden_states)
2025-08-14T21:57:01.3889006Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward
2025-08-14T21:57:01.3889160Z     layer_output = self.bottleneck(layer_output, residual_tensor_2)
2025-08-14T21:57:01.3889513Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward
2025-08-14T21:57:01.3889663Z     layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:57:01.3890016Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:57:01.3890131Z     return input_tensor * self.weight + self.bias
2025-08-14T21:57:01.3890143Z
2025-08-14T21:57:01.3890240Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3890372Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:57:01.3890623Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:01.3890705Z     return mod(**inputs)
2025-08-14T21:57:01.3891066Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:57:01.3891154Z     outputs = self.mobilebert(
2025-08-14T21:57:01.3891510Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:57:01.3891597Z     encoder_outputs = self.encoder(
2025-08-14T21:57:01.3891951Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:57:01.3892050Z     layer_outputs = layer_module(
2025-08-14T21:57:01.3892424Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward
2025-08-14T21:57:01.3892679Z     query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states)
2025-08-14T21:57:01.3893057Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward
2025-08-14T21:57:01.3893195Z     shared_attention_input = self.attention(hidden_states)
2025-08-14T21:57:01.3893552Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward
2025-08-14T21:57:01.3893659Z     layer_input = self.LayerNorm(layer_input)
2025-08-14T21:57:01.3894012Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:57:01.3894125Z     return input_tensor * self.weight + self.bias
2025-08-14T21:57:01.3894138Z
2025-08-14T21:57:01.3894235Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3894337Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3894428Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3894520Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3894619Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3894708Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3894801Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3894901Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3894993Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3895091Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3895217Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:57:01.3895463Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:01.3895571Z     return mod(**inputs)
2025-08-14T21:57:01.3895928Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:57:01.3896038Z     outputs = self.mobilebert(
2025-08-14T21:57:01.3896395Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:57:01.3896486Z     encoder_outputs = self.encoder(
2025-08-14T21:57:01.3896844Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:57:01.3896932Z     layer_outputs = layer_module(
2025-08-14T21:57:01.3897280Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward
2025-08-14T21:57:01.3897393Z     self_attention_outputs = self.attention(
2025-08-14T21:57:01.3897748Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward
2025-08-14T21:57:01.3897907Z     attention_output = self.output(self_outputs[0], layer_input)
2025-08-14T21:57:01.3898254Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward
2025-08-14T21:57:01.3898407Z     layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:57:01.3898763Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:57:01.3898874Z     return input_tensor * self.weight + self.bias
2025-08-14T21:57:01.3898887Z
2025-08-14T21:57:01.3898980Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.3899111Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:57:01.4017423Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4017504Z return mod(**inputs) 2025-08-14T21:57:01.4017893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4017982Z outputs = self.mobilebert( 2025-08-14T21:57:01.4018341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4018530Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4018900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4019023Z layer_outputs = layer_module( 2025-08-14T21:57:01.4019373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4019490Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4019845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.4019997Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.4020354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.4020502Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4020849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4020973Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4020990Z 2025-08-14T21:57:01.4021117Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4021256Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4021530Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4021611Z return mod(**inputs) 2025-08-14T21:57:01.4021973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4022083Z outputs = self.mobilebert( 2025-08-14T21:57:01.4022433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4022530Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4022880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4022977Z layer_outputs = layer_module( 2025-08-14T21:57:01.4023324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4023442Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4023802Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.4023938Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.4024298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.4024430Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.4024444Z 2025-08-14T21:57:01.4024541Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4024672Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4024916Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4024996Z return mod(**inputs) 2025-08-14T21:57:01.4025365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4025452Z outputs = self.mobilebert( 2025-08-14T21:57:01.4025824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4025916Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4026263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4026355Z layer_outputs = layer_module( 2025-08-14T21:57:01.4026723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4026844Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4027193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.4027346Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.4027701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.4027850Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4028197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4028314Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4028328Z 2025-08-14T21:57:01.4028423Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4028555Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4028799Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4028880Z return mod(**inputs) 2025-08-14T21:57:01.4029242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4029351Z outputs = self.mobilebert( 2025-08-14T21:57:01.4029706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4029816Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4030163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4030261Z layer_outputs = layer_module( 2025-08-14T21:57:01.4030611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:57:01.4030757Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:57:01.4031110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.4031247Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.4031259Z 2025-08-14T21:57:01.4031364Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4031492Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4031739Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4031827Z return mod(**inputs) 2025-08-14T21:57:01.4032182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4032281Z outputs = self.mobilebert( 2025-08-14T21:57:01.4032626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4032715Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4037317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4037415Z layer_outputs = layer_module( 2025-08-14T21:57:01.4037790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.4037997Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.4038346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:57:01.4038525Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:57:01.4038874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4038992Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4039005Z 2025-08-14T21:57:01.4039107Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4039235Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4039491Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4039575Z return mod(**inputs) 2025-08-14T21:57:01.4039937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4040031Z outputs = self.mobilebert( 2025-08-14T21:57:01.4040381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4040469Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4040826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4040913Z layer_outputs = layer_module( 2025-08-14T21:57:01.4041364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.4041584Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.4041933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:57:01.4042116Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:57:01.4042468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:57:01.4042628Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4042978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4043087Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4043099Z 2025-08-14T21:57:01.4043206Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4043332Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4043579Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4043672Z return mod(**inputs) 2025-08-14T21:57:01.4044034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4044130Z outputs = self.mobilebert( 2025-08-14T21:57:01.4044484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4044574Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4044932Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4045020Z layer_outputs = layer_module( 2025-08-14T21:57:01.4045377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:57:01.4045599Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:57:01.4045951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:57:01.4046093Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:57:01.4046459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:57:01.4046567Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:57:01.4046923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4047033Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4047045Z 2025-08-14T21:57:01.4047150Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4047246Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4047342Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4047516Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4047611Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4047706Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4047806Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4047919Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4048042Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4048139Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4048264Z cudagraph partition due to non gpu ops. 
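Note: one way to gauge how often the partitioner fired in this shard is to count the repeated message offline. The snippet below is a hypothetical triage helper, not part of the CI job; it assumes the raw log has been downloaded locally as job.log:

    # Hypothetical local triage helper (assumes the raw log is saved as job.log).
    import collections
    import re

    log_path = "job.log"
    partition_count = 0
    frame_counts = collections.Counter()

    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if "cudagraph partition due to non gpu ops" in line:
                partition_count += 1
            # Tally which modeling_mobilebert.py source lines the traces pass through.
            for match in re.finditer(r'modeling_mobilebert\.py", line (\d+)', line):
                frame_counts[match.group(1)] += 1

    print("partition messages:", partition_count)
    print("most common frames:", frame_counts.most_common(5))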
Found from : 2025-08-14T21:57:01.4048519Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4048602Z return mod(**inputs) 2025-08-14T21:57:01.4049212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4049351Z outputs = self.mobilebert( 2025-08-14T21:57:01.4049707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4049826Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4050178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4050267Z layer_outputs = layer_module( 2025-08-14T21:57:01.4050624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:57:01.4050729Z self_attention_outputs = self.attention( 2025-08-14T21:57:01.4051076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:57:01.4051236Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:57:01.4051584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:57:01.4051746Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4052141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4052320Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4052337Z 2025-08-14T21:57:01.4052463Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4052589Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4052844Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4052928Z return mod(**inputs) 2025-08-14T21:57:01.4053283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4053378Z outputs = self.mobilebert( 2025-08-14T21:57:01.4053775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4053868Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4054230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4054318Z layer_outputs = layer_module( 2025-08-14T21:57:01.4054701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4054818Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4055167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.4055311Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.4055660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.4055804Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.4055818Z 2025-08-14T21:57:01.4055912Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4056038Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4056294Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4056378Z return mod(**inputs) 2025-08-14T21:57:01.4056732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4056826Z outputs = self.mobilebert( 2025-08-14T21:57:01.4057176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4057303Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4057652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4057766Z layer_outputs = layer_module( 2025-08-14T21:57:01.4058121Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4058241Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4058600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.4058752Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.4059099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.4059256Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4059608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4059717Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4059738Z 2025-08-14T21:57:01.4059835Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4059959Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4060221Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4060303Z return mod(**inputs) 2025-08-14T21:57:01.4060657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4060753Z outputs = self.mobilebert( 2025-08-14T21:57:01.4061104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4061201Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4061574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4061664Z layer_outputs = layer_module( 2025-08-14T21:57:01.4070346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4070477Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4070986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.4071158Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.4071635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.4071805Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.4071819Z 2025-08-14T21:57:01.4071923Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4072073Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4072414Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4072503Z return mod(**inputs) 2025-08-14T21:57:01.4072940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4073030Z outputs = self.mobilebert( 2025-08-14T21:57:01.4073382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4073479Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4073826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4073940Z layer_outputs = layer_module( 2025-08-14T21:57:01.4074294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4074431Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4074790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.4074941Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.4075289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.4075446Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4075799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4075918Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4075931Z 2025-08-14T21:57:01.4076026Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4076152Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4076406Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4078657Z return mod(**inputs) 2025-08-14T21:57:01.4079014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4084196Z outputs = self.mobilebert( 2025-08-14T21:57:01.4084629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4084726Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4085087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4085189Z layer_outputs = layer_module( 2025-08-14T21:57:01.4085616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4085748Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4086101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.4086249Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.4086629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.4086768Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.4086781Z 2025-08-14T21:57:01.4086897Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4087025Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4087290Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4087374Z return mod(**inputs) 2025-08-14T21:57:01.4087734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4087838Z outputs = self.mobilebert( 2025-08-14T21:57:01.4088189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4088284Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4088639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4088729Z layer_outputs = layer_module( 2025-08-14T21:57:01.4089092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4089241Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4089595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.4089784Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.4090135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.4090304Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4090661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4090779Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4090791Z 2025-08-14T21:57:01.4090897Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4091158Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4091441Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4091563Z return mod(**inputs) 2025-08-14T21:57:01.4091926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4092027Z outputs = self.mobilebert( 2025-08-14T21:57:01.4092381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4092474Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4092836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4092925Z layer_outputs = layer_module( 2025-08-14T21:57:01.4093282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:57:01.4093436Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:57:01.4093811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.4093957Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.4093970Z 2025-08-14T21:57:01.4094067Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4094193Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4094483Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4094565Z return mod(**inputs) 2025-08-14T21:57:01.4094930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4095021Z outputs = self.mobilebert( 2025-08-14T21:57:01.4095369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4095468Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4095816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4095908Z layer_outputs = layer_module( 2025-08-14T21:57:01.4096267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.4096468Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.4096826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:57:01.4096982Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:57:01.4097329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4097472Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4097485Z 2025-08-14T21:57:01.4097584Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4097745Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4097991Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4098074Z return mod(**inputs) 2025-08-14T21:57:01.4098445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4098534Z outputs = self.mobilebert( 2025-08-14T21:57:01.4098897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4098987Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4099338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4099438Z layer_outputs = layer_module( 2025-08-14T21:57:01.4099790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.4099989Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.4100345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:57:01.4100502Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:57:01.4100862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:57:01.4101013Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4101366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4101487Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4101524Z 2025-08-14T21:57:01.4101620Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4101754Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4102005Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4102089Z return mod(**inputs) 2025-08-14T21:57:01.4102473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4102562Z outputs = self.mobilebert( 2025-08-14T21:57:01.4102912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4103009Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4103362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4103458Z layer_outputs = layer_module( 2025-08-14T21:57:01.4103809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:57:01.4104017Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:57:01.4104379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:57:01.4104515Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:57:01.4104875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:57:01.4104983Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:57:01.4105387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4109818Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4109834Z 2025-08-14T21:57:01.4109936Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4110094Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4110217Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4110312Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4110417Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4110512Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4110607Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4110707Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4110800Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4110893Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4111032Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4111291Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4111374Z return mod(**inputs) 2025-08-14T21:57:01.4111746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4111838Z outputs = self.mobilebert( 2025-08-14T21:57:01.4112194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4112285Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4112638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4112737Z layer_outputs = layer_module( 2025-08-14T21:57:01.4113086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:57:01.4113201Z self_attention_outputs = self.attention( 2025-08-14T21:57:01.4113579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:57:01.4113734Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:57:01.4114096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:57:01.4114254Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4114626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4114749Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4114762Z 2025-08-14T21:57:01.4114856Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4114994Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4115245Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4115327Z return mod(**inputs) 2025-08-14T21:57:01.4115696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4115787Z outputs = self.mobilebert( 2025-08-14T21:57:01.4116147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4116239Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4116590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4116689Z layer_outputs = layer_module( 2025-08-14T21:57:01.4117036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4117199Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4117556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.4117695Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.4118074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.4118211Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.4118226Z 2025-08-14T21:57:01.4118324Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4118460Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4118709Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4118803Z return mod(**inputs) 2025-08-14T21:57:01.4119160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4119253Z outputs = self.mobilebert( 2025-08-14T21:57:01.4119613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4119710Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4120137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4120235Z layer_outputs = layer_module( 2025-08-14T21:57:01.4120643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4120769Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4121211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.4121370Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.4121758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.4121911Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4122273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4122385Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4122418Z 2025-08-14T21:57:01.4122517Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4122652Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4122900Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4122983Z return mod(**inputs) 2025-08-14T21:57:01.4123352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4123443Z outputs = self.mobilebert( 2025-08-14T21:57:01.4123806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4123898Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4124251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4124350Z layer_outputs = layer_module( 2025-08-14T21:57:01.4124703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4124830Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4125181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.4125341Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.4125700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.4125834Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.4125869Z 2025-08-14T21:57:01.4125967Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4126101Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4126355Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4126453Z return mod(**inputs) 2025-08-14T21:57:01.4126809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4126897Z outputs = self.mobilebert( 2025-08-14T21:57:01.4127254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4127348Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4127715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4127807Z layer_outputs = layer_module( 2025-08-14T21:57:01.4128161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4128289Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4128644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.4128799Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.4129160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.4129314Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4129708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4129823Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4129836Z 2025-08-14T21:57:01.4129934Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4130070Z cudagraph partition due to non gpu ops. 
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
    intermediate_output = self.intermediate(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)

2025-08-14T21:57:01.4133536Z cudagraph partition due to non gpu ops. Found from :
  [outer frames identical to the traceback above, through modeling_mobilebert.py line 557, then]
  modeling_mobilebert.py line 515: attention_output = ffn_module(attention_output)
  modeling_mobilebert.py line 470: layer_outputs = self.output(intermediate_output, hidden_states)
  modeling_mobilebert.py line 458: layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  modeling_mobilebert.py line 138: return input_tensor * self.weight + self.bias

2025-08-14T21:57:01.4141791Z cudagraph partition due to non gpu ops. Found from :
  [outer frames as above, then]
  modeling_mobilebert.py line 518: intermediate_output = self.intermediate(attention_output)
  modeling_mobilebert.py line 360: hidden_states = self.intermediate_act_fn(hidden_states)

2025-08-14T21:57:01.4144802Z cudagraph partition due to non gpu ops. Found from :
  [outer frames as above, then]
  modeling_mobilebert.py line 519: layer_output = self.output(intermediate_output, attention_output, hidden_states)
  modeling_mobilebert.py line 397: layer_output = self.LayerNorm(layer_output + residual_tensor_1)
  modeling_mobilebert.py line 138: return input_tensor * self.weight + self.bias

2025-08-14T21:57:01.4148295Z cudagraph partition due to non gpu ops. Found from :
  [outer frames as above, then]
  modeling_mobilebert.py line 519: layer_output = self.output(intermediate_output, attention_output, hidden_states)
  modeling_mobilebert.py line 398: layer_output = self.bottleneck(layer_output, residual_tensor_2)
  modeling_mobilebert.py line 374: layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  modeling_mobilebert.py line 138: return input_tensor * self.weight + self.bias

2025-08-14T21:57:01.4152826Z cudagraph partition due to non gpu ops. Found from :
  [outer frames as above, then]
  modeling_mobilebert.py line 496: query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states)
  modeling_mobilebert.py line 444: shared_attention_input = self.attention(hidden_states)
  modeling_mobilebert.py line 410: layer_input = self.LayerNorm(layer_input)
  modeling_mobilebert.py line 138: return input_tensor * self.weight + self.bias

2025-08-14T21:57:01.4156625Z cudagraph partition due to non gpu ops  [this message repeats 10 times with no traceback]

2025-08-14T21:57:01.4157641Z cudagraph partition due to non gpu ops. Found from :
  [outer frames as above, then]
  modeling_mobilebert.py line 500: self_attention_outputs = self.attention(
  modeling_mobilebert.py line 344: attention_output = self.output(self_outputs[0], layer_input)
  modeling_mobilebert.py line 295: layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  modeling_mobilebert.py line 138: return input_tensor * self.weight + self.bias
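Nearly every traceback above bottoms out either in modeling_mobilebert.py line 360 (the FFN activation, self.intermediate_act_fn) or in line 138, whose source line matches the forward of MobileBERT's NoNorm layer, the per-feature multiply-add that this model uses in place of LayerNorm. Below is a minimal sketch of that module for reference, assuming the standard transformers definition: the forward line is verbatim from the tracebacks, while the constructor details are an assumption and not taken from this log.

import torch
from torch import nn

class NoNorm(nn.Module):
    # Sketch of the module behind modeling_mobilebert.py line 138 in the
    # tracebacks above; parameter initialization is assumed, not read from this log.
    def __init__(self, feat_size, eps=None):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(feat_size))
        self.weight = nn.Parameter(torch.ones(feat_size))

    def forward(self, input_tensor):
        # The line the partition messages point at: plain pointwise arithmetic.
        return input_tensor * self.weight + self.bias

These are the sites Inductor's cudagraph handling reports as containing non gpu ops in this run, so the captured graph is partitioned around them.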
2025-08-14T21:57:01.4161649Z cudagraph partition due to non gpu ops. Found from : [the same set of tracebacks listed above repeats, interleaved with further bare "cudagraph partition due to non gpu ops" messages]
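The outermost frame in every one of these tracebacks is forward_pass in benchmarks/dynamo/huggingface.py (line 532), which simply calls return mod(**inputs) on the compiled model. The following is a minimal sketch of that call path; the checkpoint name, tokenizer usage, and compile options are illustrative assumptions rather than this job's actual configuration.

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Illustrative checkpoint; the benchmark harness builds its own model and config.
model_name = "google/mobilebert-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
mod = AutoModelForMaskedLM.from_pretrained(model_name).eval()

# Compile the module roughly the way the dynamo benchmark does before timing it.
mod = torch.compile(mod)

inputs = tokenizer("cudagraph partition example input", return_tensors="pt")
with torch.no_grad():
    # Mirrors forward_pass at huggingface.py:532: return mod(**inputs)
    outputs = mod(**inputs)

Running a script like this under the same torch and transformers versions should exercise the same encoder / layer_module / FFN call path shown in the tracebacks; whether the cudagraph partition messages appear depends on the Inductor configuration and the device the model actually runs on.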
Found from : 2025-08-14T21:57:01.4261081Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4261170Z return mod(**inputs) 2025-08-14T21:57:01.4261530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4261618Z outputs = self.mobilebert( 2025-08-14T21:57:01.4261974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4262065Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4262421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4262510Z layer_outputs = layer_module( 2025-08-14T21:57:01.4262857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:57:01.4262972Z self_attention_outputs = self.attention( 2025-08-14T21:57:01.4263421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:57:01.4263647Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:57:01.4264382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:57:01.4264579Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4264935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4269333Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4269346Z 2025-08-14T21:57:01.4269447Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4269639Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4269891Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4269984Z return mod(**inputs) 2025-08-14T21:57:01.4270351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4270439Z outputs = self.mobilebert( 2025-08-14T21:57:01.4270801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4270894Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4271245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4271346Z layer_outputs = layer_module( 2025-08-14T21:57:01.4271693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4271816Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4272169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.4272339Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.4272695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.4272854Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.4272867Z 2025-08-14T21:57:01.4272970Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4273100Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4273349Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4273440Z return mod(**inputs) 2025-08-14T21:57:01.4273798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4273886Z outputs = self.mobilebert( 2025-08-14T21:57:01.4274239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4274336Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4274689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4274779Z layer_outputs = layer_module( 2025-08-14T21:57:01.4275129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4275258Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4275603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.4275760Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.4276109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.4276260Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4276647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4276761Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4276773Z 2025-08-14T21:57:01.4276878Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4277004Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4277272Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4277360Z return mod(**inputs) 2025-08-14T21:57:01.4277716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4277806Z outputs = self.mobilebert( 2025-08-14T21:57:01.4278164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4278257Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4278619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4278707Z layer_outputs = layer_module( 2025-08-14T21:57:01.4279056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4279180Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4279604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.4279742Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.4280160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.4280323Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.4280336Z 2025-08-14T21:57:01.4280442Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4280569Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4280840Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4280929Z return mod(**inputs) 2025-08-14T21:57:01.4281370Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4281465Z outputs = self.mobilebert( 2025-08-14T21:57:01.4281814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4281901Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4282252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4282340Z layer_outputs = layer_module( 2025-08-14T21:57:01.4282686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4282808Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4283156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.4283319Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.4283667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.4283816Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4284170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4284283Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4284296Z 2025-08-14T21:57:01.4284427Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4284553Z cudagraph partition due to non gpu ops. 
2025-08-14T21:57:01.4284553Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
    intermediate_output = self.intermediate(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:57:01.4287819Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4287969Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
    layer_outputs = self.output(intermediate_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
2025-08-14T21:57:01.4291733Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4291862Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward
    intermediate_output = self.intermediate(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:57:01.4298967Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4299096Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
    layer_output = self.output(intermediate_output, attention_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward
    layer_output = self.LayerNorm(layer_output + residual_tensor_1)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
2025-08-14T21:57:01.4302489Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4302625Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
    layer_output = self.output(intermediate_output, attention_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward
    layer_output = self.bottleneck(layer_output, residual_tensor_2)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
2025-08-14T21:57:01.4306529Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4306652Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward
    query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward
    shared_attention_input = self.attention(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward
    layer_input = self.LayerNorm(layer_input)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
2025-08-14T21:57:01.4310573Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4310667Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4310767Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4310859Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4310955Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4311104Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4311196Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4311295Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4311387Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4311480Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4311615Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward
    self_attention_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward
    attention_output = self.output(self_outputs[0], layer_input)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
2025-08-14T21:57:01.4315390Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4315548Z cudagraph partition due to non gpu ops.
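Note: "cudagraph partition due to non gpu ops" appears to be the inductor CUDA-graph machinery reporting that it has to split (partition) a compiled region around operations it cannot capture on the GPU. The sketch below is an assumed, self-contained illustration of that situation and is not taken from this job: a compiled forward that mixes GPU work with a CPU round-trip. It uses only torch.compile(mode="reduce-overhead"); whether it emits these exact log lines depends on the PyTorch build and logging settings.

import torch


def mixed_forward(x: torch.Tensor) -> torch.Tensor:
    # GPU-side work ...
    y = torch.relu(x) * 2.0
    # ... interrupted by a CPU round-trip; from the CUDA-graph
    # capturer's point of view this is a "non gpu op", so the
    # compiled region would have to be partitioned around it.
    z = y.cpu().sum()
    return y + z.to(y.device)


if __name__ == "__main__":
    if torch.cuda.is_available():
        compiled = torch.compile(mixed_forward, mode="reduce-overhead")
        x = torch.randn(4, 4, device="cuda")
        print(compiled(x).shape)
    else:
        print("CUDA not available; nothing to capture.")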
Found from : 2025-08-14T21:57:01.4437835Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4437916Z return mod(**inputs) 2025-08-14T21:57:01.4438274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4438372Z outputs = self.mobilebert( 2025-08-14T21:57:01.4438743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4438839Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4439263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4439355Z layer_outputs = layer_module( 2025-08-14T21:57:01.4439764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4439884Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4440235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.4440384Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.4440735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.4440881Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.4440894Z 2025-08-14T21:57:01.4440992Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4441180Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4441436Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4441544Z return mod(**inputs) 2025-08-14T21:57:01.4441910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4441997Z outputs = self.mobilebert( 2025-08-14T21:57:01.4447405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4447514Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4447883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4447976Z layer_outputs = layer_module( 2025-08-14T21:57:01.4448343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4448464Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4449228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.4449408Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.4449764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.4449919Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4450279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4450399Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4450412Z 2025-08-14T21:57:01.4450523Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4450655Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4450907Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4451002Z return mod(**inputs) 2025-08-14T21:57:01.4451359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4451565Z outputs = self.mobilebert( 2025-08-14T21:57:01.4451918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4452009Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4452397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4452488Z layer_outputs = layer_module( 2025-08-14T21:57:01.4452844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:57:01.4453000Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:57:01.4453354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.4457683Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.4457701Z 2025-08-14T21:57:01.4457804Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4457937Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4458239Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4458323Z return mod(**inputs) 2025-08-14T21:57:01.4458703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4458798Z outputs = self.mobilebert( 2025-08-14T21:57:01.4459151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4459254Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4459647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4459737Z layer_outputs = layer_module( 2025-08-14T21:57:01.4460095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.4460352Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.4460716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:57:01.4460868Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:57:01.4461220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4461342Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4461356Z 2025-08-14T21:57:01.4461455Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4461597Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4461849Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4461935Z return mod(**inputs) 2025-08-14T21:57:01.4462358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4462447Z outputs = self.mobilebert( 2025-08-14T21:57:01.4462798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4462895Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4463244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4463344Z layer_outputs = layer_module( 2025-08-14T21:57:01.4463692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.4463913Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.4464273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:57:01.4464427Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:57:01.4464807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:57:01.4464959Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4465308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4465434Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4465446Z 2025-08-14T21:57:01.4465541Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4465676Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4465926Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4466011Z return mod(**inputs) 2025-08-14T21:57:01.4466374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4466465Z outputs = self.mobilebert( 2025-08-14T21:57:01.4466819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4466915Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4467264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4467387Z layer_outputs = layer_module( 2025-08-14T21:57:01.4467737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:57:01.4468017Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:57:01.4468422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:57:01.4468578Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:57:01.4468935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:57:01.4469043Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:57:01.4469391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4469511Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4469523Z 2025-08-14T21:57:01.4469617Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4469713Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4469817Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4469915Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4470020Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4470112Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4470204Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4470308Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4470401Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4470493Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4470681Z cudagraph partition due to non gpu ops. 
2025-08-14T21:57:01.4470934Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:01.4471021Z     return mod(**inputs)
2025-08-14T21:57:01.4471386Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:57:01.4471506Z     outputs = self.mobilebert(
2025-08-14T21:57:01.4471864Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:57:01.4471957Z     encoder_outputs = self.encoder(
2025-08-14T21:57:01.4472331Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:57:01.4472430Z     layer_outputs = layer_module(
2025-08-14T21:57:01.4472786Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward
2025-08-14T21:57:01.4472902Z     self_attention_outputs = self.attention(
2025-08-14T21:57:01.4473249Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward
2025-08-14T21:57:01.4473405Z     attention_output = self.output(self_outputs[0], layer_input)
2025-08-14T21:57:01.4473763Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward
2025-08-14T21:57:01.4473917Z     layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:57:01.4474269Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:57:01.4474389Z     return input_tensor * self.weight + self.bias
2025-08-14T21:57:01.4474402Z 
2025-08-14T21:57:01.4474499Z cudagraph partition due to non gpu ops
2025-08-14T21:57:01.4474633Z cudagraph partition due to non gpu ops.
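Every one of the partition warnings above bottoms out in the same innermost frame, modeling_mobilebert.py line 138, whose body (return input_tensor * self.weight + self.bias) is MobileBERT's element-wise replacement for LayerNorm. The snippet below is a minimal, illustrative stand-in for that pattern only; the class name and feature size are assumptions for the sketch, not code copied from transformers.

import torch
from torch import nn


class ElementwiseAffineNorm(nn.Module):
    # Illustrative stand-in for the normalization behind modeling_mobilebert.py:138:
    # a per-feature scale and shift instead of a full LayerNorm.
    def __init__(self, feat_size: int):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(feat_size))
        self.bias = nn.Parameter(torch.zeros(feat_size))

    def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
        # Exactly the expression reported in the innermost traceback frame.
        return input_tensor * self.weight + self.bias


if __name__ == "__main__":
    layer = ElementwiseAffineNorm(512)
    out = layer(torch.randn(2, 128, 512))
    print(out.shape)  # torch.Size([2, 128, 512])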
Found from : 2025-08-14T21:57:01.4599950Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4600063Z return mod(**inputs) 2025-08-14T21:57:01.4600419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4600509Z outputs = self.mobilebert( 2025-08-14T21:57:01.4600888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4600977Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4601421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4601507Z layer_outputs = layer_module( 2025-08-14T21:57:01.4601859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4601983Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4602335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.4602498Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.4602849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.4602999Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4603355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4603469Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4603481Z 2025-08-14T21:57:01.4603584Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4603710Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4603959Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4604049Z return mod(**inputs) 2025-08-14T21:57:01.4604425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4604513Z outputs = self.mobilebert( 2025-08-14T21:57:01.4604874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4604964Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4605320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4605407Z layer_outputs = layer_module( 2025-08-14T21:57:01.4605756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4605906Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4606257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:57:01.4606401Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:57:01.4606750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.4606885Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.4606898Z 2025-08-14T21:57:01.4607000Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4607123Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4607368Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4607454Z return mod(**inputs) 2025-08-14T21:57:01.4607814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4607946Z outputs = self.mobilebert( 2025-08-14T21:57:01.4608300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4608391Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4608769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4608858Z layer_outputs = layer_module( 2025-08-14T21:57:01.4609207Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:57:01.4609329Z attention_output = ffn_module(attention_output) 2025-08-14T21:57:01.4609677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:57:01.4609835Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:57:01.4610186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:57:01.4610335Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4610687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4610801Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4610814Z 2025-08-14T21:57:01.4610917Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4611043Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4611294Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4611382Z return mod(**inputs) 2025-08-14T21:57:01.4611743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4611835Z outputs = self.mobilebert( 2025-08-14T21:57:01.4612191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4612306Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4612667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4612758Z layer_outputs = layer_module( 2025-08-14T21:57:01.4617344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:57:01.4617504Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:57:01.4617854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:57:01.4618034Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:57:01.4618049Z 2025-08-14T21:57:01.4618145Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4618271Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4618527Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4618612Z return mod(**inputs) 2025-08-14T21:57:01.4618970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4619068Z outputs = self.mobilebert( 2025-08-14T21:57:01.4619418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4619518Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4619869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4619989Z layer_outputs = layer_module( 2025-08-14T21:57:01.4620346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.4620544Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.4620921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:57:01.4621073Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:57:01.4621423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4621540Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4621552Z 2025-08-14T21:57:01.4621650Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4621786Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:01.4622038Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4622121Z return mod(**inputs) 2025-08-14T21:57:01.4622491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:57:01.4622584Z outputs = self.mobilebert( 2025-08-14T21:57:01.4622931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:57:01.4623026Z encoder_outputs = self.encoder( 2025-08-14T21:57:01.4623374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:57:01.4623469Z layer_outputs = layer_module( 2025-08-14T21:57:01.4623821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:57:01.4624017Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:57:01.4624399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:57:01.4624552Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:57:01.4624908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:57:01.4625062Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:57:01.4625412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:57:01.4625532Z return input_tensor * self.weight + self.bias 2025-08-14T21:57:01.4625545Z 2025-08-14T21:57:01.4625639Z cudagraph partition due to non gpu ops 2025-08-14T21:57:01.4625791Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:01.4626051Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:01.4626135Z return mod(**inputs) 2025-08-14T21:57:01.4626499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1274, in forward 2025-08-14T21:57:01.4626631Z start_loss = loss_fct(start_logits, start_positions) 2025-08-14T21:57:01.4626644Z 2025-08-14T21:57:01.4626769Z cudagraph partition due to non gpu ops. 
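The recurring "cudagraph partition due to non gpu ops" notices come from Inductor's cudagraph handling: CUDA graph capture can only record GPU kernels, so a compiled region that contains host-side work gets partitioned around it. On this shard the models are evaluated on CPU (see the "cpu eval" and empty_gpu_cache warnings below), so every candidate region trips the notice; it is expected, benign logging rather than an error. A toy illustration of the mechanism, assuming a CUDA-capable machine and an invented function rather than the benchmark code (exact wording and skip-vs-partition behaviour depend on the PyTorch build):

# Toy example (assumes a CUDA device; mixed_device_fn is invented for illustration).
# The explicit .cpu() round-trip is host-side work that cannot be recorded into a
# CUDA graph, so Inductor has to partition (or skip) cudagraphs around it.
import torch

def mixed_device_fn(x: torch.Tensor) -> torch.Tensor:
    y = torch.relu(x) * 2.0      # device-side pointwise work
    s = y.sum().cpu()            # non-GPU op: breaks the capturable region
    return y + s.to(y.device)    # device-side work resumes after the partition point

if __name__ == "__main__" and torch.cuda.is_available():
    compiled = torch.compile(mixed_device_fn, mode="reduce-overhead")  # enables cudagraphs
    x = torch.randn(1024, device="cuda")
    for _ in range(3):           # warm-up calls give capture/replay a chance to kick in
        compiled(x)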
Found from :
2025-08-14T21:57:01.4627023Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:01.4627103Z return mod(**inputs)
2025-08-14T21:57:01.4627533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1275, in forward
2025-08-14T21:57:01.4627651Z end_loss = loss_fct(end_logits, end_positions)
2025-08-14T21:57:01.4627689Z
2025-08-14T21:57:13.8105586Z Compilation time (from dynamo_timed): 59.118169353
2025-08-14T21:57:13.8105948Z pass
2025-08-14T21:57:13.8107410Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:57:13.8108494Z TIMING: _recursive_pre_grad_passes:0.21516 _recursive_joint_graph_passes:1.90811 _recursive_post_grad_passes:0.28039 async_compile.wait:0.22558 code_gen:7.52067 inductor_compile:16.0667 backend_compile:42.44971 gc:0.00034 entire_frame_compile:59.11817 total_wall_time:59.11817
2025-08-14T21:57:13.8111868Z STATS: call_* op count: 1453 | FakeTensorMode.__torch_dispatch__:103267 | FakeTensor.__torch_dispatch__:12538 | ProxyTorchDispatchMode.__torch_dispatch__:23231
2025-08-14T21:57:13.8112513Z Dynamo produced 1 graphs covering 1453 ops with 0 graph breaks (0 unique)
2025-08-14T21:57:20.6010844Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:57:20.6011988Z from pkg_resources import resource_filename
2025-08-14T21:57:21.3314612Z
2025-08-14T21:57:24.2213570Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:57:24.2213935Z loading model: 0it [00:02, ?it/s]
2025-08-14T21:57:24.2232043Z cpu eval OPTForCausalLM
2025-08-14T21:57:26.8158107Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:57:28.0699196Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:57:29.3343871Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:57:43.6649405Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6649830Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6654411Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6654767Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6655050Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6655572Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6655926Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6656253Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6656594Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6656966Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6657323Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6658578Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6658915Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6659254Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6659574Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6659896Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6660178Z cudagraph partition due to non gpu ops
2025-08-14T21:57:43.6660675Z cudagraph partition due to non gpu ops
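The dynamo_timed summary a few lines above accounts for the 59.12 s of entire_frame_compile by phase (graph passes, Inductor compile, code_gen, backend_compile), and the Dynamo line confirms a single graph with 0 graph breaks over 1453 ops. The same compile-versus-steady-state split can be observed from outside the harness by timing the first call to a compiled module (which pays for tracing and compilation) against a later cached call; the model and sizes below are placeholders, not the benchmark's setup:

# Rough sketch: the first call to a torch.compile'd module includes Dynamo tracing
# and Inductor compilation, later calls reuse the compiled artifact. The toy model
# here is an assumption for illustration, not the OPT/MobileBERT benchmark config.
import time
import torch

def timed_call(fn, *args) -> float:
    t0 = time.perf_counter()
    fn(*args)
    return time.perf_counter() - t0

model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 64))
x = torch.randn(8, 64)

compiled = torch.compile(model)
first = timed_call(compiled, x)    # includes compilation time
steady = timed_call(compiled, x)   # compiled graph is cached, run-only
print(f"first call: {first:.3f}s  steady state: {steady:.3f}s")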
2025-08-14T21:57:43.6661050Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6661447Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:43.6662100Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6662702Z return mod(**inputs) 2025-08-14T21:57:43.6663369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6663982Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6664812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6665363Z outputs = self.model.decoder( 2025-08-14T21:57:43.6665811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6666258Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6666804Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6667280Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6667716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6668235Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6668708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6669214Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6669705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6670204Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6670770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:43.6671382Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:43.6671621Z 2025-08-14T21:57:43.6671753Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6672208Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6672615Z return mod(**inputs) 2025-08-14T21:57:43.6673016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6673771Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6674363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6674991Z outputs = self.model.decoder( 2025-08-14T21:57:43.6675586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6676124Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6676698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6677419Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6678000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6678626Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6687887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6688713Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6689500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6690409Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6691209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:43.6691904Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:43.6692170Z 2025-08-14T21:57:43.6692279Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6692634Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6693017Z cudagraph partition due to non gpu ops. 
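The OPTForCausalLM traces end inside the SDPA attention path: the call into torch.nn.functional.scaled_dot_product_attention (sdpa_attention.py line 81) and the attn_output.transpose(1, 2).contiguous() that follows it (line 91). A stripped-down sketch of that call pattern, with illustrative names and shapes rather than the real sdpa_attention_forward signature:

# Minimal sketch of the SDPA call pattern the OPT frames resolve to; the function
# name, arguments and shapes are illustrative, not transformers' actual interface.
import torch
import torch.nn.functional as F

def sdpa_sketch(query, key, value, attn_mask=None, dropout_p=0.0):
    # query/key/value: (batch, num_heads, seq_len, head_dim)
    attn_output = F.scaled_dot_product_attention(
        query, key, value,
        attn_mask=attn_mask, dropout_p=dropout_p, is_causal=attn_mask is None,
    )
    # Back to (batch, seq_len, num_heads, head_dim) and contiguous for the output
    # projection, mirroring the transpose(1, 2).contiguous() line in the trace.
    return attn_output.transpose(1, 2).contiguous()

if __name__ == "__main__":
    q = k = v = torch.randn(2, 12, 16, 64)
    print(sdpa_sketch(q, k, v).shape)  # torch.Size([2, 16, 12, 64])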
Found from : 2025-08-14T21:57:43.6695741Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6696158Z return mod(**inputs) 2025-08-14T21:57:43.6696569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6697007Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6697471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6697952Z outputs = self.model.decoder( 2025-08-14T21:57:43.6698513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6699038Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6699610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6700257Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6700805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6701415Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6702127Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-08-14T21:57:43.6702824Z hidden_states = self.activation_fn(hidden_states) 2025-08-14T21:57:43.6703026Z 2025-08-14T21:57:43.6703193Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6703452Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6703802Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6704143Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6704552Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6704822Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6705162Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6705469Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6705813Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6706311Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6706705Z return mod(**inputs) 2025-08-14T21:57:43.6707112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6707556Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6708105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6708623Z outputs = self.model.decoder( 2025-08-14T21:57:43.6709048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6709539Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6709999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6710466Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6710906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6711346Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6711814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6712310Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6712881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6713380Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6713937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:43.6714549Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:43.6714782Z 2025-08-14T21:57:43.6714921Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6715358Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6715768Z return mod(**inputs) 2025-08-14T21:57:43.6716175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6716611Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6717134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6717633Z outputs = self.model.decoder( 2025-08-14T21:57:43.6718063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6718526Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6718988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6719462Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6719884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6720335Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6720806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6721369Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6721861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6722357Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6727161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:43.6727739Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:43.6727945Z 2025-08-14T21:57:43.6728051Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6728316Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6728609Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6729047Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6729456Z return mod(**inputs) 2025-08-14T21:57:43.6729869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6730316Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6730773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6731266Z outputs = self.model.decoder( 2025-08-14T21:57:43.6731694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6732124Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6732582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6733042Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6733471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6733912Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6734405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-08-14T21:57:43.6734899Z hidden_states = self.activation_fn(hidden_states) 2025-08-14T21:57:43.6735088Z 2025-08-14T21:57:43.6735194Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6735442Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6735692Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6735940Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6736179Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6736422Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6736669Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6736907Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6737259Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6737755Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6738151Z return mod(**inputs) 2025-08-14T21:57:43.6738593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6739040Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6739510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6739999Z outputs = self.model.decoder( 2025-08-14T21:57:43.6740425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6740867Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6741323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6741790Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6742222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6742671Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6743134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6743633Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6744132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6744652Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6745201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:43.6745801Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:43.6746033Z 2025-08-14T21:57:43.6746169Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6746615Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6747018Z return mod(**inputs) 2025-08-14T21:57:43.6747427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6747872Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6748350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6749191Z outputs = self.model.decoder( 2025-08-14T21:57:43.6749627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6750066Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6750526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6750996Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6755675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6756180Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6756648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6757148Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6757648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6758130Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6758681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:43.6759257Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:43.6759461Z 2025-08-14T21:57:43.6759569Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6759821Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6760109Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6760612Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6761008Z return mod(**inputs) 2025-08-14T21:57:43.6761495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6761976Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6762431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6762905Z outputs = self.model.decoder( 2025-08-14T21:57:43.6763340Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6763780Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6764233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6764703Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6765134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6765580Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6766115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-08-14T21:57:43.6766665Z hidden_states = self.activation_fn(hidden_states) 2025-08-14T21:57:43.6766855Z 2025-08-14T21:57:43.6766961Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6767349Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6767600Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6767845Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6768094Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6768328Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6768575Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6768830Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6769114Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6769574Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6770038Z return mod(**inputs) 2025-08-14T21:57:43.6770441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6770885Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6771344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6771808Z outputs = self.model.decoder( 2025-08-14T21:57:43.6772228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6772662Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6773151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6773614Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6774049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6774500Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6774973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6775467Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6775956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6776444Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6776994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:43.6777591Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:43.6777858Z 2025-08-14T21:57:43.6777989Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6778434Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6778854Z return mod(**inputs) 2025-08-14T21:57:43.6779260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6779698Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6780155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6784880Z outputs = self.model.decoder( 2025-08-14T21:57:43.6785371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6785810Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6786267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6786737Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6787169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6787624Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6788090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6788590Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6789086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6789579Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6790126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:43.6790705Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:43.6790908Z 2025-08-14T21:57:43.6791015Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6791268Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6791580Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6792027Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6792428Z return mod(**inputs) 2025-08-14T21:57:43.6792823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6793260Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6793728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6794196Z outputs = self.model.decoder( 2025-08-14T21:57:43.6794658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6795172Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6795695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6796156Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6796595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6797045Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6797506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-08-14T21:57:43.6798008Z hidden_states = self.activation_fn(hidden_states) 2025-08-14T21:57:43.6798206Z 2025-08-14T21:57:43.6798306Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6798558Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6798809Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6799089Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6799334Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6799570Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6799813Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6800058Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6800358Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6800799Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6801266Z return mod(**inputs) 2025-08-14T21:57:43.6801666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6802100Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6802569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6803044Z outputs = self.model.decoder( 2025-08-14T21:57:43.6803469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6803911Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6804380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6804859Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6805288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6805742Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6806218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6806717Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6807203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6807700Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6808295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:43.6808900Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:43.6809144Z 2025-08-14T21:57:43.6809274Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6813959Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6814417Z return mod(**inputs) 2025-08-14T21:57:43.6814816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6815257Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6815764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6816235Z outputs = self.model.decoder( 2025-08-14T21:57:43.6816661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6817098Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6817561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6818023Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6818458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6818913Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6819382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6819872Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6830484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6831129Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6831719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:43.6832339Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:43.6832547Z 2025-08-14T21:57:43.6832651Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6832911Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6833200Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6833662Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6834067Z return mod(**inputs) 2025-08-14T21:57:43.6834485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6834940Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6835411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6835895Z outputs = self.model.decoder( 2025-08-14T21:57:43.6836330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6836773Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6837235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6837709Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6838148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6847204Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6847881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-08-14T21:57:43.6848549Z hidden_states = self.activation_fn(hidden_states) 2025-08-14T21:57:43.6849166Z 2025-08-14T21:57:43.6849290Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6849583Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6849972Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6850272Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6850533Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6850788Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6851041Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6851282Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6851574Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6852028Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6852452Z return mod(**inputs) 2025-08-14T21:57:43.6852928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6855335Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6855813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6856284Z outputs = self.model.decoder( 2025-08-14T21:57:43.6856724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6857177Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6857650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6858116Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6858555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6859014Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6859483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6860031Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6860531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6861065Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6861616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:43.6862222Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:43.6862468Z 2025-08-14T21:57:43.6862602Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6863051Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6863458Z return mod(**inputs) 2025-08-14T21:57:43.6863867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6864316Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6864779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6865254Z outputs = self.model.decoder( 2025-08-14T21:57:43.6865686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6866127Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6866593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6867060Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6867574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6868079Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6868545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6869049Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6869575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6870071Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6870632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:43.6871207Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:43.6871411Z 2025-08-14T21:57:43.6871523Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6871772Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6872058Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6872528Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6872933Z return mod(**inputs) 2025-08-14T21:57:43.6873343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6873792Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6874262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6874723Z outputs = self.model.decoder( 2025-08-14T21:57:43.6875155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6875593Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6876050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6876525Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6876963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6877442Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6877909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-08-14T21:57:43.6878433Z hidden_states = self.activation_fn(hidden_states) 2025-08-14T21:57:43.6878624Z 2025-08-14T21:57:43.6878738Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6879000Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6879245Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6879489Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6879742Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6879977Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6880220Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6880463Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6880738Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6881283Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6881695Z return mod(**inputs) 2025-08-14T21:57:43.6886350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6886803Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6887271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6887747Z outputs = self.model.decoder( 2025-08-14T21:57:43.6888175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6888665Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6889138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6889613Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6890043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6890498Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6891005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6891505Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6892010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6892516Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6893073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:43.6893686Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:43.6893944Z 2025-08-14T21:57:43.6894078Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6894525Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6894937Z return mod(**inputs) 2025-08-14T21:57:43.6895354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6895803Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6896271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6896807Z outputs = self.model.decoder( 2025-08-14T21:57:43.6897276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6897714Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6898179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6898667Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6899093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6899549Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6900044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6900532Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6901023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6901514Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6902069Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:43.6902636Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:43.6902846Z 2025-08-14T21:57:43.6902944Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6903199Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6903482Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6903924Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6904334Z return mod(**inputs) 2025-08-14T21:57:43.6904744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6905179Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6905648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6906125Z outputs = self.model.decoder( 2025-08-14T21:57:43.6906556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6906992Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6907464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6907926Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6908396Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6908858Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6909328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-08-14T21:57:43.6909825Z hidden_states = self.activation_fn(hidden_states) 2025-08-14T21:57:43.6910018Z 2025-08-14T21:57:43.6910113Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6910375Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6910619Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6910862Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6915384Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6915686Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6915927Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6916178Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6916455Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6916911Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6917308Z return mod(**inputs) 2025-08-14T21:57:43.6917716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6918154Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6918609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6919081Z outputs = self.model.decoder( 2025-08-14T21:57:43.6919509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6919998Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6920458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6920956Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6921470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6921912Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6922380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6922877Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6923371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6923860Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6924417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:43.6925021Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:43.6925271Z 2025-08-14T21:57:43.6925413Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6925958Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6926401Z return mod(**inputs) 2025-08-14T21:57:43.6926796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6927241Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6927711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6928193Z outputs = self.model.decoder( 2025-08-14T21:57:43.6928623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6929066Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6929566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6930027Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6930458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6930905Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6931372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6931861Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6932350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6932869Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6933422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:43.6933991Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:43.6934203Z 2025-08-14T21:57:43.6934302Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6934556Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6934830Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6935272Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6935677Z return mod(**inputs) 2025-08-14T21:57:43.6936088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6936524Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6936992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6937486Z outputs = self.model.decoder( 2025-08-14T21:57:43.6937908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6938371Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6938837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6939304Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6939734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6944443Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6944962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-08-14T21:57:43.6945458Z hidden_states = self.activation_fn(hidden_states) 2025-08-14T21:57:43.6945660Z 2025-08-14T21:57:43.6945758Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6946022Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6946278Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6946519Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6946765Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6947010Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6947244Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6947489Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6947778Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6948215Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6948625Z return mod(**inputs) 2025-08-14T21:57:43.6949406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6949862Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6950334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6950820Z outputs = self.model.decoder( 2025-08-14T21:57:43.6951337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6951784Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6952248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6952712Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6953145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6953583Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6954098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6954673Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6955224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6955714Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6956265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:43.6956865Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:43.6957097Z 2025-08-14T21:57:43.6957234Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6957668Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6958071Z return mod(**inputs) 2025-08-14T21:57:43.6958478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6958963Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6959424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6959894Z outputs = self.model.decoder( 2025-08-14T21:57:43.6960324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6960795Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6961343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6961814Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6962242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6962692Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6963170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6963681Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6964169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6964670Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6965228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:43.6965792Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:43.6966006Z 2025-08-14T21:57:43.6966105Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6966362Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6966646Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6967086Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6967499Z return mod(**inputs) 2025-08-14T21:57:43.6967916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6968357Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6968864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6977748Z outputs = self.model.decoder( 2025-08-14T21:57:43.6978309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6978774Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6979243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6979720Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6980196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6980644Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6981110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-08-14T21:57:43.6981608Z hidden_states = self.activation_fn(hidden_states) 2025-08-14T21:57:43.6981803Z 2025-08-14T21:57:43.6981900Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6982153Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6982403Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6982646Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6982885Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6983127Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6983372Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6983676Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.6983985Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6984452Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6984880Z return mod(**inputs) 2025-08-14T21:57:43.6985286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6985727Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6986232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6986691Z outputs = self.model.decoder( 2025-08-14T21:57:43.6987117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6987556Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6988014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6988476Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6988910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6989359Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6989817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6990325Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6990817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.6991301Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.6991856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:43.6992457Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:43.6992692Z 2025-08-14T21:57:43.6992831Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.6993273Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.6993678Z return mod(**inputs) 2025-08-14T21:57:43.6994109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6994551Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6995007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.6995485Z outputs = self.model.decoder( 2025-08-14T21:57:43.6995911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.6996345Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.6996810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.6997299Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.6997734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.6998260Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.6998786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.6999291Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.6999778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.7000280Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.7000836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:43.7001469Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:43.7001671Z 2025-08-14T21:57:43.7001772Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7002060Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7002346Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.7002841Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.7003273Z return mod(**inputs) 2025-08-14T21:57:43.7003677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7004119Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7004585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.7005052Z outputs = self.model.decoder( 2025-08-14T21:57:43.7005474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7005913Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7006367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.7006865Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.7007314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.7007756Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.7008220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-08-14T21:57:43.7008711Z hidden_states = self.activation_fn(hidden_states) 2025-08-14T21:57:43.7008906Z 2025-08-14T21:57:43.7009008Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7009253Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7009502Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7009745Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7009988Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7010232Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7010480Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7010715Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7010993Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.7011496Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.7011902Z return mod(**inputs) 2025-08-14T21:57:43.7012303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7019127Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7019598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.7020063Z outputs = self.model.decoder( 2025-08-14T21:57:43.7020501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7020975Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7021448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.7021911Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.7022349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.7022802Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.7023260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.7023762Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.7024257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.7024753Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.7025300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:43.7025945Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:43.7026189Z 2025-08-14T21:57:43.7026323Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.7026791Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.7027254Z return mod(**inputs) 2025-08-14T21:57:43.7027715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7028164Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7028624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.7029089Z outputs = self.model.decoder( 2025-08-14T21:57:43.7029518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7029959Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7030417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.7030887Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.7031325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.7031782Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.7032242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.7032736Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.7033231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.7033720Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.7034270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:43.7034840Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:43.7035041Z 2025-08-14T21:57:43.7035171Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7035423Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7035709Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.7036150Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.7036544Z return mod(**inputs) 2025-08-14T21:57:43.7036955Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7037397Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7037887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.7038353Z outputs = self.model.decoder( 2025-08-14T21:57:43.7038783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7039229Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7039684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.7040155Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.7040592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.7041115Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.7045818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-08-14T21:57:43.7046371Z hidden_states = self.activation_fn(hidden_states) 2025-08-14T21:57:43.7046563Z 2025-08-14T21:57:43.7046673Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7048050Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7048306Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7048559Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7049153Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7049396Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7049712Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7049962Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7050235Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.7050684Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.7051099Z return mod(**inputs) 2025-08-14T21:57:43.7051508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7051963Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7052447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.7052925Z outputs = self.model.decoder( 2025-08-14T21:57:43.7053352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7053794Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7054257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.7054720Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.7055142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.7055593Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.7056132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.7056688Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.7057184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.7057676Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.7058267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:43.7058869Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:43.7059111Z 2025-08-14T21:57:43.7059239Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.7059682Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.7060084Z return mod(**inputs) 2025-08-14T21:57:43.7060482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7060924Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7061424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.7061888Z outputs = self.model.decoder( 2025-08-14T21:57:43.7062320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7062759Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7063225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.7063682Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.7064113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.7064561Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.7065026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:57:43.7065532Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:43.7066080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:57:43.7066576Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:43.7067126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:43.7067722Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:43.7067929Z 2025-08-14T21:57:43.7068035Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7068292Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7068567Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.7069013Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.7069423Z return mod(**inputs) 2025-08-14T21:57:43.7069826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7070270Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7074914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:57:43.7075435Z outputs = self.model.decoder( 2025-08-14T21:57:43.7075880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7076330Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7076806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:57:43.7077268Z layer_outputs = decoder_layer( 2025-08-14T21:57:43.7077709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:43.7078163Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:43.7078650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-08-14T21:57:43.7079142Z hidden_states = self.activation_fn(hidden_states) 2025-08-14T21:57:43.7079345Z 2025-08-14T21:57:43.7079474Z cudagraph partition due to non gpu ops 2025-08-14T21:57:43.7079762Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:43.7080196Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.7080600Z return mod(**inputs) 2025-08-14T21:57:43.7081013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7081557Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7082012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 841, in forward 2025-08-14T21:57:43.7082520Z logits = self.lm_head(outputs[0]).contiguous() 2025-08-14T21:57:43.7082713Z 2025-08-14T21:57:43.7082848Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:43.7083283Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:43.7083680Z return mod(**inputs) 2025-08-14T21:57:43.7084088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:57:43.7084524Z output = func(self, *args, **kwargs) 2025-08-14T21:57:43.7084980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 847, in forward 2025-08-14T21:57:43.7085560Z loss = self.loss_function( 2025-08-14T21:57:43.7086008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss 2025-08-14T21:57:43.7086590Z loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs) 2025-08-14T21:57:43.7087198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy 2025-08-14T21:57:43.7087852Z loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction) 2025-08-14T21:57:43.7088166Z 2025-08-14T21:57:52.9827438Z Compilation time (from dynamo_timed): 20.599577958 2025-08-14T21:57:53.0404062Z pass 2025-08-14T21:57:53.0404904Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:57:53.0406330Z TIMING: _recursive_pre_grad_passes:0.0583 _recursive_joint_graph_passes:0.75642 _recursive_post_grad_passes:0.12819 async_compile.wait:1.061 code_gen:7.61535 inductor_compile:11.39919 backend_compile:17.42874 gc:0.00072 entire_frame_compile:20.59958 total_wall_time:20.59958 2025-08-14T21:57:53.0407750Z STATS: call_* op count: 415 | FakeTensorMode.__torch_dispatch__:23751 | FakeTensor.__torch_dispatch__:3685 | ProxyTorchDispatchMode.__torch_dispatch__:5527 2025-08-14T21:57:53.0408401Z Dynamo produced 1 graphs covering 415 ops with 0 graph breaks (0 unique) 2025-08-14T21:57:59.9078811Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
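The last trace for this model ends in transformers' loss helpers (ForCausalLMLoss -> fixed_cross_entropy), which reduce to a shifted cross-entropy over next-token logits. A rough sketch of that shift-and-flatten pattern, with an assumed vocabulary size and the conventional ignore_index of -100; the real helpers additionally handle num_items_in_batch normalization:

    import torch
    import torch.nn.functional as F

    vocab_size, ignore_index = 50265, -100  # assumed values for illustration

    logits = torch.randn(2, 8, vocab_size)          # (batch, seq, vocab)
    labels = torch.randint(0, vocab_size, (2, 8))   # (batch, seq)

    # Next-token prediction: the token at position t is scored by the logits at t-1.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()

    loss = F.cross_entropy(
        shift_logits.view(-1, vocab_size),
        shift_labels.view(-1),
        ignore_index=ignore_index,
        reduction="mean",
    )
    print(loss.item())

This is only a sketch of the pattern the traceback names; the benchmark uses the library's own loss path unchanged.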
2025-08-14T21:57:59.9084202Z from pkg_resources import resource_filename 2025-08-14T21:58:00.6483143Z 2025-08-14T21:58:02.7361604Z loading model: 0it [00:00, ?it/s] 2025-08-14T21:58:02.7361949Z loading model: 0it [00:02, ?it/s] 2025-08-14T21:58:02.7377617Z cpu eval PLBartForCausalLM 2025-08-14T21:58:03.7327235Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:58:04.2403167Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:58:04.7336196Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:58:14.2237200Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2237728Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2238389Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2238672Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2238936Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2239202Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2239444Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2239702Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2239983Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2240243Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2240567Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2246732Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2247195Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2247795Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:58:14.2248586Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2249634Z return mod(**inputs) 2025-08-14T21:58:14.2250243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2250819Z outputs = self.model.decoder( 2025-08-14T21:58:14.2251406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2251985Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2252521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2253028Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2253539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:58:14.2254177Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:58:14.2254696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:14.2255222Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:14.2255860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:58:14.2256464Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:14.2256770Z 2025-08-14T21:58:14.2256958Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:14.2257412Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2257823Z return mod(**inputs) 2025-08-14T21:58:14.2258281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2258785Z outputs = self.model.decoder( 2025-08-14T21:58:14.2259269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2259981Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2260425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2260874Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2261455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:58:14.2261969Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:58:14.2262488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:14.2263002Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:14.2263552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:58:14.2264240Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:58:14.2264457Z 2025-08-14T21:58:14.2264564Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2264825Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2265106Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:14.2265552Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2265958Z return mod(**inputs) 2025-08-14T21:58:14.2266424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2275332Z outputs = self.model.decoder( 2025-08-14T21:58:14.2276022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2276682Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2277240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2277842Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2278526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 792, in forward 2025-08-14T21:58:14.2279086Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:58:14.2279571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:14.2280004Z return self.act(input) 2025-08-14T21:58:14.2280147Z 2025-08-14T21:58:14.2280261Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2280523Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2280769Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2281067Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2281469Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2281771Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2282027Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2282281Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2282610Z cudagraph partition due to non gpu ops. 
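This variant of the trace ends in the PLBart decoder's feed-forward path, hidden_states = self.activation_fn(self.fc1(hidden_states)). A tiny stand-in for that pattern, with assumed layer sizes and a GELU activation chosen purely for illustration:

    import torch
    from torch import nn

    # Assumed sizes, not the benchmark's real config: hidden=32, ffn=128.
    fc1 = nn.Linear(32, 128)
    act = nn.GELU()  # activation family assumed for this sketch

    hidden_states = torch.randn(2, 8, 32)           # (batch, seq, hidden)
    hidden_states = act(fc1(hidden_states))          # modeling_plbart.py line 792 pattern
    print(hidden_states.shape)                       # torch.Size([2, 8, 128])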
Found from : 2025-08-14T21:58:14.2283062Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2283467Z return mod(**inputs) 2025-08-14T21:58:14.2283936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2284436Z outputs = self.model.decoder( 2025-08-14T21:58:14.2284926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2285429Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2285919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2286373Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2286873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:58:14.2287398Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:58:14.2287907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:14.2288421Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:14.2288979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:58:14.2289580Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:14.2289823Z 2025-08-14T21:58:14.2289957Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:58:14.2290405Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2290806Z return mod(**inputs) 2025-08-14T21:58:14.2291294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2291789Z outputs = self.model.decoder( 2025-08-14T21:58:14.2292272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2292758Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2293182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2293633Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2294173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:58:14.2294691Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:58:14.2295217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:14.2295820Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:14.2296443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:58:14.2297017Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:58:14.2297236Z 2025-08-14T21:58:14.2297337Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2297598Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2297888Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:14.2298349Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2298758Z return mod(**inputs) 2025-08-14T21:58:14.2299248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2299743Z outputs = self.model.decoder( 2025-08-14T21:58:14.2300228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2300750Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2301184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2301630Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2302130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 792, in forward 2025-08-14T21:58:14.2302681Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:58:14.2303168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:14.2303601Z return self.act(input) 2025-08-14T21:58:14.2303754Z 2025-08-14T21:58:14.2303855Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2304116Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2304365Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2304617Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2304877Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2305119Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2305367Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2305626Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2305905Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:14.2306354Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2306756Z return mod(**inputs) 2025-08-14T21:58:14.2307219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2307711Z outputs = self.model.decoder( 2025-08-14T21:58:14.2308218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2308706Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2309133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2309584Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2310086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:58:14.2316980Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:58:14.2317493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:14.2318049Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:14.2318612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:58:14.2319271Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:14.2319513Z 2025-08-14T21:58:14.2319645Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:58:14.2320092Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2320500Z return mod(**inputs) 2025-08-14T21:58:14.2320949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2321514Z outputs = self.model.decoder( 2025-08-14T21:58:14.2321989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2322475Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2322929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2323382Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2323887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:58:14.2324423Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:58:14.2325038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:14.2325589Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:14.2326144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:58:14.2326712Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:58:14.2326921Z 2025-08-14T21:58:14.2327023Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2327281Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2327570Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:14.2328011Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2328419Z return mod(**inputs) 2025-08-14T21:58:14.2328880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2329365Z outputs = self.model.decoder( 2025-08-14T21:58:14.2329852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2330344Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2330777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2331231Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2331734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 792, in forward 2025-08-14T21:58:14.2332284Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:58:14.2332798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:14.2333230Z return self.act(input) 2025-08-14T21:58:14.2333377Z 2025-08-14T21:58:14.2333476Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2333728Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2333972Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2334220Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2334471Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2334708Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2334959Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2335240Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2335519Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:14.2335961Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2336362Z return mod(**inputs) 2025-08-14T21:58:14.2336823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2337313Z outputs = self.model.decoder( 2025-08-14T21:58:14.2337792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2351038Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2351622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2352097Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2352624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:58:14.2353280Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:58:14.2353949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:14.2354588Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:14.2355140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:58:14.2355760Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:14.2356009Z 2025-08-14T21:58:14.2356147Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:58:14.2356609Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2357014Z return mod(**inputs) 2025-08-14T21:58:14.2357493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2358005Z outputs = self.model.decoder( 2025-08-14T21:58:14.2358497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2358989Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2359435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2359890Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2360454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:58:14.2360986Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:58:14.2361605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:14.2362132Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:14.2362684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:58:14.2363314Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:58:14.2363527Z 2025-08-14T21:58:14.2363642Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2363903Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2364183Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:14.2364639Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2365053Z return mod(**inputs) 2025-08-14T21:58:14.2365510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2366008Z outputs = self.model.decoder( 2025-08-14T21:58:14.2366526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2367026Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2367457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2367916Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2372556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 792, in forward 2025-08-14T21:58:14.2373105Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:58:14.2373598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:14.2374034Z return self.act(input) 2025-08-14T21:58:14.2374175Z 2025-08-14T21:58:14.2374284Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2374534Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2374793Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2375076Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2375321Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2375568Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2375820Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2376061Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2376371Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:14.2376824Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2377233Z return mod(**inputs) 2025-08-14T21:58:14.2377689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2378186Z outputs = self.model.decoder( 2025-08-14T21:58:14.2378672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2379161Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2379602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2380056Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2380556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:58:14.2381072Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:58:14.2381593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:14.2382117Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:14.2382751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:58:14.2383406Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:14.2383649Z 2025-08-14T21:58:14.2383791Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:58:14.2384244Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2384645Z return mod(**inputs) 2025-08-14T21:58:14.2385139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2385640Z outputs = self.model.decoder( 2025-08-14T21:58:14.2386123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2386611Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2387047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2387557Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2388073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:58:14.2388601Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:58:14.2389124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:14.2389637Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:14.2390189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:58:14.2390763Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:58:14.2390974Z 2025-08-14T21:58:14.2391079Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2391345Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2391633Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:14.2392081Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2392491Z return mod(**inputs) 2025-08-14T21:58:14.2392971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2393466Z outputs = self.model.decoder( 2025-08-14T21:58:14.2393954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2394475Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2394903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2395362Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2395861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 792, in forward 2025-08-14T21:58:14.2396404Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:58:14.2396899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:14.2401413Z return self.act(input) 2025-08-14T21:58:14.2401556Z 2025-08-14T21:58:14.2401668Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2401924Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2402185Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2402441Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2402685Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2402940Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2403196Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2403434Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2403728Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:14.2404187Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2404600Z return mod(**inputs) 2025-08-14T21:58:14.2405061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2405559Z outputs = self.model.decoder( 2025-08-14T21:58:14.2406081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2406576Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2407017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2407473Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2407973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:58:14.2408489Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:58:14.2409010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:14.2409554Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:14.2410115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:58:14.2410715Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:14.2410962Z 2025-08-14T21:58:14.2411094Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:58:14.2411549Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2412054Z return mod(**inputs) 2025-08-14T21:58:14.2412541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2413043Z outputs = self.model.decoder( 2025-08-14T21:58:14.2413533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2414023Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2414493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2414955Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2415456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:58:14.2415993Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:58:14.2416567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:14.2417086Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:14.2417640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:58:14.2418208Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:58:14.2418420Z 2025-08-14T21:58:14.2418520Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2418777Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2419054Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:14.2419499Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2419909Z return mod(**inputs) 2025-08-14T21:58:14.2420364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-08-14T21:58:14.2420847Z outputs = self.model.decoder( 2025-08-14T21:58:14.2421322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:14.2421809Z layer_outputs = decoder_layer( 2025-08-14T21:58:14.2422228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:14.2422684Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:14.2423181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 792, in forward 2025-08-14T21:58:14.2423723Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:58:14.2424241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:14.2424669Z return self.act(input) 2025-08-14T21:58:14.2424808Z 2025-08-14T21:58:14.2424916Z cudagraph partition due to non gpu ops 2025-08-14T21:58:14.2425201Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:58:14.2425649Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2426056Z return mod(**inputs) 2025-08-14T21:58:14.2435392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1694, in forward 2025-08-14T21:58:14.2436855Z logits = self.lm_head(outputs[0]) 2025-08-14T21:58:14.2437074Z 2025-08-14T21:58:14.2437228Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:58:14.2437817Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:14.2438236Z return mod(**inputs) 2025-08-14T21:58:14.2438688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1700, in forward 2025-08-14T21:58:14.2439272Z loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) 2025-08-14T21:58:14.2439523Z 2025-08-14T21:58:20.3340485Z Compilation time (from dynamo_timed): 13.581407863 2025-08-14T21:58:20.3741200Z pass 2025-08-14T21:58:20.3741634Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:58:20.3742802Z TIMING: _recursive_pre_grad_passes:0.03199 _recursive_joint_graph_passes:0.36697 _recursive_post_grad_passes:0.07656 async_compile.wait:1.00315 code_gen:5.24015 inductor_compile:8.27929 backend_compile:11.79544 gc:0.00186 entire_frame_compile:13.58141 total_wall_time:13.58141 2025-08-14T21:58:20.3744224Z STATS: call_* op count: 198 | FakeTensorMode.__torch_dispatch__:13155 | FakeTensor.__torch_dispatch__:2127 | ProxyTorchDispatchMode.__torch_dispatch__:2975 2025-08-14T21:58:20.3744913Z Dynamo produced 1 graphs covering 198 ops with 0 graph breaks (0 unique) 2025-08-14T21:58:27.1022935Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. 
Refrain from using this package or pin to Setuptools<81. 2025-08-14T21:58:27.1024038Z from pkg_resources import resource_filename 2025-08-14T21:58:27.8388854Z 2025-08-14T21:58:31.6713040Z loading model: 0it [00:00, ?it/s] 2025-08-14T21:58:31.6713385Z loading model: 0it [00:03, ?it/s] 2025-08-14T21:58:31.6740552Z cpu eval PLBartForConditionalGeneration 2025-08-14T21:58:33.6319656Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:58:34.7046964Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:58:35.7560032Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:58:53.5285486Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:58:53.5286143Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5286588Z return mod(**inputs) 2025-08-14T21:58:53.5287085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1357, in forward 2025-08-14T21:58:53.5287688Z decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id) 2025-08-14T21:58:53.5288338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1084, in shift_tokens_right 2025-08-14T21:58:53.5289049Z index_of_eos = (prev_output_tokens.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1) 2025-08-14T21:58:53.5289350Z 2025-08-14T21:58:53.5289796Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5290074Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5290335Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5290621Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5290870Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5291108Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5291357Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5291606Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5291841Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5292089Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5292331Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5292630Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5292883Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5293172Z cudagraph partition due to non gpu ops. 
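The new partition cause reported in this run is shift_tokens_right, a plain indexing computation on the labels tensor. A small worked example of the index_of_eos line from the traceback (the values and pad_token_id below are made up):

import torch

labels = torch.tensor([[5, 6, 2, 1, 1],
                       [7, 8, 9, 2, 1]])
pad_token_id = 1
# position of the last non-pad token (the EOS) in each row
index_of_eos = (labels.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
print(index_of_eos)  # tensor([[2], [3]])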
Found from : 2025-08-14T21:58:53.5293716Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5294255Z return mod(**inputs) 2025-08-14T21:58:53.5294914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5295580Z outputs = self.model( 2025-08-14T21:58:53.5296275Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5297012Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5297638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5298325Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5299059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5299642Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5300248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward 2025-08-14T21:58:53.5300897Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:58:53.5301504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:53.5302141Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:53.5302770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:58:53.5303478Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:53.5303823Z 2025-08-14T21:58:53.5308243Z cudagraph partition due to non gpu ops. 
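On the llvmlite UserWarning a few records up: it is triggered by "from pkg_resources import resource_filename". A minimal sketch of the usual importlib.resources replacement (the package and resource names below are placeholders, not taken from llvmlite):

from importlib.resources import files

def resource_filename(package: str, resource: str) -> str:
    # returns a filesystem path string for a data file shipped inside a package
    return str(files(package).joinpath(resource))

# e.g. resource_filename("mypkg", "data/config.json")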
Found from : 2025-08-14T21:58:53.5308708Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5309125Z return mod(**inputs) 2025-08-14T21:58:53.5309585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5310080Z outputs = self.model( 2025-08-14T21:58:53.5310543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5311028Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5311508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5311993Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5312563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5313104Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5313737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward 2025-08-14T21:58:53.5314455Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:58:53.5315087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:53.5315714Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:53.5316442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:58:53.5317209Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:58:53.5317421Z 2025-08-14T21:58:53.5317568Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5317901Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5318314Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5318852Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5319250Z return mod(**inputs) 2025-08-14T21:58:53.5319714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5320205Z outputs = self.model( 2025-08-14T21:58:53.5320661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5321209Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5321689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5322222Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5322689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5323260Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5323859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 507, in forward 2025-08-14T21:58:53.5324511Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:58:53.5325151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:53.5325631Z return self.act(input) 2025-08-14T21:58:53.5325771Z 2025-08-14T21:58:53.5325876Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5326123Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5326377Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5326621Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5326855Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5327134Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5327463Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5327706Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5328060Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5328615Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5329181Z return mod(**inputs) 2025-08-14T21:58:53.5329725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5330293Z outputs = self.model( 2025-08-14T21:58:53.5330839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5331452Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5332076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5332707Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5337514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5338067Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5338722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward 2025-08-14T21:58:53.5339370Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:58:53.5339989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:53.5340676Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:53.5341368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:58:53.5342144Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:53.5342473Z 2025-08-14T21:58:53.5342727Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5343389Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5343980Z return mod(**inputs) 2025-08-14T21:58:53.5344714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5345295Z outputs = self.model( 2025-08-14T21:58:53.5345936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5346656Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5347280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5347841Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5348272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5349027Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5349574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward 2025-08-14T21:58:53.5350101Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:58:53.5350612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:53.5351186Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:53.5351926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:58:53.5352503Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:58:53.5352710Z 2025-08-14T21:58:53.5352819Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5353084Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5353367Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5353958Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5354459Z return mod(**inputs) 2025-08-14T21:58:53.5354913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5355397Z outputs = self.model( 2025-08-14T21:58:53.5355855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5356353Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5356824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5357310Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5357742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5358182Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5358669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 507, in forward 2025-08-14T21:58:53.5359279Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:58:53.5359764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:53.5360178Z return self.act(input) 2025-08-14T21:58:53.5360320Z 2025-08-14T21:58:53.5360420Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5360672Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5360915Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5361256Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5361523Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5361798Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5366216Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5366509Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5366800Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5367242Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5367647Z return mod(**inputs) 2025-08-14T21:58:53.5368112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5368593Z outputs = self.model( 2025-08-14T21:58:53.5369047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5369533Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5370014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5370490Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5370922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5371411Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5371900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward 2025-08-14T21:58:53.5372445Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:58:53.5372950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:53.5373463Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:53.5374012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:58:53.5374611Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:53.5374852Z 2025-08-14T21:58:53.5375141Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5375594Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5376022Z return mod(**inputs) 2025-08-14T21:58:53.5376588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5377077Z outputs = self.model( 2025-08-14T21:58:53.5377528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5378017Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5378516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5379004Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5379437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5379900Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5380439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward 2025-08-14T21:58:53.5380950Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:58:53.5381480Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:53.5381998Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:53.5382550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:58:53.5383110Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:58:53.5383322Z 2025-08-14T21:58:53.5383420Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5383671Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5383950Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5384464Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5384875Z return mod(**inputs) 2025-08-14T21:58:53.5385334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5385808Z outputs = self.model( 2025-08-14T21:58:53.5386257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5386743Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5387217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5387700Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5388126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5388578Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5389078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 507, in forward 2025-08-14T21:58:53.5389637Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:58:53.5390120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:53.5390614Z return self.act(input) 2025-08-14T21:58:53.5390754Z 2025-08-14T21:58:53.5390853Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5399517Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5399804Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5400091Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5400330Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5400573Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5400821Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5401144Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5401428Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5401884Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5402282Z return mod(**inputs) 2025-08-14T21:58:53.5402746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5403236Z outputs = self.model( 2025-08-14T21:58:53.5403700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5404187Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5404673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5405213Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5407773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5408229Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5408722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward 2025-08-14T21:58:53.5409263Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:58:53.5409765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:53.5410279Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:53.5410832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:58:53.5411434Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:53.5411669Z 2025-08-14T21:58:53.5411800Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5412275Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5412687Z return mod(**inputs) 2025-08-14T21:58:53.5413139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5413622Z outputs = self.model( 2025-08-14T21:58:53.5414075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5414562Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5415031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5415519Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5415952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5416390Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5416884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward 2025-08-14T21:58:53.5417414Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:58:53.5417916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:53.5418442Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:53.5418992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:58:53.5419603Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:58:53.5419805Z 2025-08-14T21:58:53.5419979Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5420225Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5420506Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5420952Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5421355Z return mod(**inputs) 2025-08-14T21:58:53.5421817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5422303Z outputs = self.model( 2025-08-14T21:58:53.5422766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5423242Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5423723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5424206Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5424631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5425078Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5425572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 507, in forward 2025-08-14T21:58:53.5426173Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:58:53.5426689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:53.5427121Z return self.act(input) 2025-08-14T21:58:53.5427260Z 2025-08-14T21:58:53.5427363Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5427611Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5427858Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5428104Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5428353Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5428591Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5428838Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5429085Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5429382Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5429832Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5430235Z return mod(**inputs) 2025-08-14T21:58:53.5430691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5431184Z outputs = self.model( 2025-08-14T21:58:53.5431640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5432130Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5432601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5433088Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5433521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5433978Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5438770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward 2025-08-14T21:58:53.5439285Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:58:53.5439788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:53.5440318Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:53.5440874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:58:53.5441555Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:53.5441787Z 2025-08-14T21:58:53.5441924Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5442362Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5442766Z return mod(**inputs) 2025-08-14T21:58:53.5443221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5443707Z outputs = self.model( 2025-08-14T21:58:53.5444151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5444641Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5445122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5445599Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5446038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5446483Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5446977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward 2025-08-14T21:58:53.5447477Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:58:53.5448010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:53.5448573Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:53.5449453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:58:53.5450030Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:58:53.5450243Z 2025-08-14T21:58:53.5450341Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5450597Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5450876Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5451317Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5451777Z return mod(**inputs) 2025-08-14T21:58:53.5452225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5452766Z outputs = self.model( 2025-08-14T21:58:53.5453331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5453824Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5454299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5454783Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5455212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5455660Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5456145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 507, in forward 2025-08-14T21:58:53.5456729Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:58:53.5457212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:53.5457632Z return self.act(input) 2025-08-14T21:58:53.5457807Z 2025-08-14T21:58:53.5457904Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5458159Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5458406Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5458643Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5458890Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5459132Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5459369Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5459613Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5459889Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5460329Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5460727Z return mod(**inputs) 2025-08-14T21:58:53.5461179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5461663Z outputs = self.model( 2025-08-14T21:58:53.5462109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5462591Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5463125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5467798Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5468233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5468685Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5469178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward 2025-08-14T21:58:53.5469682Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:58:53.5470229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:53.5470750Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:53.5471305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:58:53.5471896Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:53.5472136Z 2025-08-14T21:58:53.5472265Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5472709Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5473125Z return mod(**inputs) 2025-08-14T21:58:53.5473585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5474072Z outputs = self.model( 2025-08-14T21:58:53.5474530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5475018Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5475500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5475991Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5476419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5476869Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5477362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward 2025-08-14T21:58:53.5478011Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:58:53.5478511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:58:53.5479028Z attn_output, attn_weights = attention_interface( 2025-08-14T21:58:53.5479606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:58:53.5480176Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:58:53.5480383Z 2025-08-14T21:58:53.5480491Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5480758Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5481130Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5481581Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5482027Z return mod(**inputs) 2025-08-14T21:58:53.5482496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5482982Z outputs = self.model( 2025-08-14T21:58:53.5483434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:58:53.5483924Z encoder_outputs = self.encoder( 2025-08-14T21:58:53.5484407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:58:53.5484883Z layer_outputs = encoder_layer( 2025-08-14T21:58:53.5485317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5485762Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5486255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 507, in forward 2025-08-14T21:58:53.5486792Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:58:53.5487278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:53.5487724Z return self.act(input) 2025-08-14T21:58:53.5487865Z 2025-08-14T21:58:53.5487972Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5488215Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5488460Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5488701Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5488934Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5489177Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5489415Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5489650Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5489928Z cudagraph partition due to non gpu ops. 
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:58:53.5502477Z cudagraph partition due to non gpu ops
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:58:53.5510695Z cudagraph partition due to non gpu ops (×9)
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:58:53.5520860Z cudagraph partition due to non gpu ops
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:58:53.5533323Z cudagraph partition due to non gpu ops (×3)
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 792, in forward
    hidden_states = self.activation_fn(self.fc1(hidden_states))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)
2025-08-14T21:58:53.5540780Z cudagraph partition due to non gpu ops (×9)
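The two attention traces in each decoder layer end in the same pair of lines inside transformers' SDPA integration: the torch.nn.functional.scaled_dot_product_attention call at sdpa_attention.py line 81 and the transpose(1, 2).contiguous() that follows it at line 91. A minimal, self-contained reproduction of just those two ops on random CPU tensors (the shapes are assumptions, not the model's real dimensions):

    import torch
    import torch.nn.functional as F

    # (batch, num_heads, seq_len, head_dim) -- illustrative shapes only
    q = torch.randn(2, 4, 8, 16)
    k = torch.randn(2, 4, 8, 16)
    v = torch.randn(2, 4, 8, 16)

    # Corresponds to sdpa_attention.py line 81 in the traces above.
    attn_output = F.scaled_dot_product_attention(q, k, v)
    # Corresponds to sdpa_attention.py line 91: swap the heads and sequence
    # dimensions and make the result contiguous.
    attn_output = attn_output.transpose(1, 2).contiguous()
    print(attn_output.shape)  # torch.Size([2, 8, 4, 16])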
[2025-08-14T21:58:53.5543202Z to 2025-08-14T21:58:53.5819220Z: the same five decoder-layer stack traces (self-attention SDPA call, self-attention transpose/contiguous, cross-attention SDPA call, cross-attention transpose/contiguous, fc1 activation) repeat for the remaining decoder layers, each followed by between 1 and 9 repeated "cudagraph partition due to non gpu ops" messages.]
Found from : 2025-08-14T21:58:53.5819691Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5820097Z return mod(**inputs) 2025-08-14T21:58:53.5820548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:58:53.5821027Z outputs = self.model( 2025-08-14T21:58:53.5821481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:58:53.5821971Z decoder_outputs = self.decoder( 2025-08-14T21:58:53.5822460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:58:53.5822948Z layer_outputs = decoder_layer( 2025-08-14T21:58:53.5823377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:53.5823824Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:53.5824307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 792, in forward 2025-08-14T21:58:53.5824854Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:58:53.5825333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:53.5825797Z return self.act(input) 2025-08-14T21:58:53.5826011Z 2025-08-14T21:58:53.5826108Z cudagraph partition due to non gpu ops 2025-08-14T21:58:53.5826391Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:58:53.5826833Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5827246Z return mod(**inputs) 2025-08-14T21:58:53.5827698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1377, in forward 2025-08-14T21:58:53.5828195Z lm_logits = self.lm_head(outputs[0]) 2025-08-14T21:58:53.5828365Z 2025-08-14T21:58:53.5828518Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:53.5828955Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:53.5829346Z return mod(**inputs) 2025-08-14T21:58:53.5829837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1383, in forward 2025-08-14T21:58:53.5830432Z masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) 2025-08-14T21:58:53.5830708Z 2025-08-14T21:59:02.0402657Z Compilation time (from dynamo_timed): 23.564283899 2025-08-14T21:59:02.0729776Z pass 2025-08-14T21:59:02.0732297Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:59:02.0733320Z TIMING: _recursive_pre_grad_passes:0.07102 _recursive_joint_graph_passes:0.67292 _recursive_post_grad_passes:0.14274 async_compile.wait:0.97573 code_gen:6.69845 inductor_compile:11.31543 backend_compile:19.41454 gc:0.00259 entire_frame_compile:23.56428 total_wall_time:23.56428 2025-08-14T21:59:02.0734466Z STATS: call_* op count: 517 | FakeTensorMode.__torch_dispatch__:32810 | FakeTensor.__torch_dispatch__:5139 | ProxyTorchDispatchMode.__torch_dispatch__:7226 2025-08-14T21:59:02.0735097Z Dynamo produced 1 graphs covering 517 ops with 0 graph breaks (0 unique) 2025-08-14T21:59:08.5882775Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-14T21:59:08.5883881Z from pkg_resources import resource_filename 2025-08-14T21:59:09.3380586Z 2025-08-14T21:59:14.9874123Z loading model: 0it [00:00, ?it/s] 2025-08-14T21:59:14.9874469Z loading model: 0it [00:05, ?it/s] 2025-08-14T21:59:14.9900986Z cpu eval PegasusForCausalLM 2025-08-14T21:59:15.7960587Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:59:16.2213761Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:59:16.6223480Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:59:31.1340181Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1340790Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1341164Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1341529Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1342133Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1342477Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1342794Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1343109Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1343429Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1343760Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1344039Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1344307Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1344626Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1344996Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1345315Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1345718Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1345965Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1346271Z cudagraph partition 
due to non gpu ops 2025-08-14T21:59:31.1346514Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1346905Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:31.1347557Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1347980Z return mod(**inputs) 2025-08-14T21:59:31.1348496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1349413Z outputs = self.model.decoder( 2025-08-14T21:59:31.1349911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1350421Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1350862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1351383Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1351888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1352429Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1352956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1353480Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1354043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:59:31.1354658Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:59:31.1354895Z 2025-08-14T21:59:31.1355037Z cudagraph partition due to non gpu ops. 
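Note: the TIMING line a few entries above breaks the 23.56 s entire_frame_compile down into Dynamo and Inductor phases (inductor_compile 11.32 s, code_gen 6.70 s, async_compile.wait 0.98 s), and the "empty_gpu_cache for device: cpu" warnings together with the repeated "cudagraph partition due to non gpu ops" messages are consistent with this being a CPU-only Inductor run, where cudagraphs cannot apply. A minimal sketch of the same compile path (torch.compile with the Inductor backend on CPU tensors; TinyMLP is a hypothetical stand-in, not the benchmarked PegasusForCausalLM):

import torch
import torch.nn as nn

class TinyMLP(nn.Module):
    """Hypothetical stand-in for the benchmarked model (CPU only)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc2(self.act(self.fc1(x)))

model = TinyMLP().eval()
compiled = torch.compile(model, backend="inductor")  # same backend as this job

with torch.no_grad():
    x = torch.randn(8, 64)     # CPU tensors; no CUDA/XPU device involved
    out = compiled(x)          # first call triggers the Dynamo/Inductor compile
    print(out.shape)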
Found from : 2025-08-14T21:59:31.1355480Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1355889Z return mod(**inputs) 2025-08-14T21:59:31.1356501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1357127Z outputs = self.model.decoder( 2025-08-14T21:59:31.1357832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1358567Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1359261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1360061Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1360801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1365739Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1366378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1366975Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1367538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:59:31.1368115Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:31.1368325Z 2025-08-14T21:59:31.1368442Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1368697Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1368993Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1369451Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1369853Z return mod(**inputs) 2025-08-14T21:59:31.1370429Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1371092Z outputs = self.model.decoder( 2025-08-14T21:59:31.1371703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1372441Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1373054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1373565Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1374264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:59:31.1374884Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:59:31.1375381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:59:31.1375863Z return self.act(input) 2025-08-14T21:59:31.1376008Z 2025-08-14T21:59:31.1376190Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1376529Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1376779Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1377025Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1377270Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1377511Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1377758Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1377998Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1378286Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1378736Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1379138Z return mod(**inputs) 2025-08-14T21:59:31.1379606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1380107Z outputs = self.model.decoder( 2025-08-14T21:59:31.1380596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1381092Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1381528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1381985Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1382519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1383051Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1383576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1384104Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1384652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:59:31.1385259Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:59:31.1385496Z 2025-08-14T21:59:31.1385706Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:31.1386157Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1386558Z return mod(**inputs) 2025-08-14T21:59:31.1387025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1387527Z outputs = self.model.decoder( 2025-08-14T21:59:31.1388014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1388528Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1388967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1389427Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1389929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1394783Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1395331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1395895Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1396449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:59:31.1397038Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:31.1397249Z 2025-08-14T21:59:31.1397353Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1397599Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1397885Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1398328Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1398787Z return mod(**inputs) 2025-08-14T21:59:31.1399248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1399744Z outputs = self.model.decoder( 2025-08-14T21:59:31.1400234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1400724Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1401236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1401691Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1402181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:59:31.1402724Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:59:31.1403211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:59:31.1403641Z return self.act(input) 2025-08-14T21:59:31.1403781Z 2025-08-14T21:59:31.1403888Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1404170Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1404422Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1404720Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1404969Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1405294Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1405542Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1405781Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1406070Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1406516Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1406963Z return mod(**inputs) 2025-08-14T21:59:31.1407465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1407971Z outputs = self.model.decoder( 2025-08-14T21:59:31.1408460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1408952Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1409385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1409843Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1410348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1410869Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1411394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1411921Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1412494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:59:31.1413101Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:59:31.1413366Z 2025-08-14T21:59:31.1413501Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:31.1430872Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1431561Z return mod(**inputs) 2025-08-14T21:59:31.1432110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1432634Z outputs = self.model.decoder( 2025-08-14T21:59:31.1433143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1433687Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1436307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1436775Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1437295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1437838Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1438428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1438957Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1439523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:59:31.1440096Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:31.1440319Z 2025-08-14T21:59:31.1440428Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1440689Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1440984Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1441616Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1442040Z return mod(**inputs) 2025-08-14T21:59:31.1442518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1443019Z outputs = self.model.decoder( 2025-08-14T21:59:31.1443519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1444025Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1444467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1444951Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1445458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:59:31.1446017Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:59:31.1446515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:59:31.1446939Z return self.act(input) 2025-08-14T21:59:31.1447085Z 2025-08-14T21:59:31.1447185Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1447440Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1447686Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1447936Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1448252Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1448499Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1449194Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1449453Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1449822Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1450269Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1450681Z return mod(**inputs) 2025-08-14T21:59:31.1451193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1451695Z outputs = self.model.decoder( 2025-08-14T21:59:31.1452193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1452694Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1453136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1453586Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1454096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1454631Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1455153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1455692Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1456254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:59:31.1456869Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:59:31.1457108Z 2025-08-14T21:59:31.1457242Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:31.1457693Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1458105Z return mod(**inputs) 2025-08-14T21:59:31.1458580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1459073Z outputs = self.model.decoder( 2025-08-14T21:59:31.1459606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1460110Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1460539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1461005Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1461512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1462045Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1462599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1467341Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1467903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:59:31.1468484Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:31.1468694Z 2025-08-14T21:59:31.1468798Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1469067Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1469358Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1469799Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1470204Z return mod(**inputs) 2025-08-14T21:59:31.1470678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1471180Z outputs = self.model.decoder( 2025-08-14T21:59:31.1471667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1472208Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1472653Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1473130Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1473635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:59:31.1474195Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:59:31.1474684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:59:31.1475106Z return self.act(input) 2025-08-14T21:59:31.1475260Z 2025-08-14T21:59:31.1475360Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1475627Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1475876Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1476130Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1476384Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1476634Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1476876Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1477136Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1477463Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1477984Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1478395Z return mod(**inputs) 2025-08-14T21:59:31.1478872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1479367Z outputs = self.model.decoder( 2025-08-14T21:59:31.1479868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1480376Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1480816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1481368Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1481879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1482415Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1482955Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1483482Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1484047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:59:31.1484678Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:59:31.1484917Z 2025-08-14T21:59:31.1485064Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:31.1485514Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1485930Z return mod(**inputs) 2025-08-14T21:59:31.1486408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1486905Z outputs = self.model.decoder( 2025-08-14T21:59:31.1487395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1487897Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1488335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1488783Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1489288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1489845Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1490375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1490912Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1491471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:59:31.1496335Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:31.1496543Z 2025-08-14T21:59:31.1496653Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1496905Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1497196Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1497650Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1498057Z return mod(**inputs) 2025-08-14T21:59:31.1498536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1499049Z outputs = self.model.decoder( 2025-08-14T21:59:31.1499533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1500035Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1500476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1500930Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1501425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:59:31.1501977Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:59:31.1502463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:59:31.1502888Z return self.act(input) 2025-08-14T21:59:31.1503029Z 2025-08-14T21:59:31.1503153Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1503407Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1503659Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1503904Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1504146Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1504388Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1504625Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1504870Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1505151Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1505603Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1506024Z return mod(**inputs) 2025-08-14T21:59:31.1506617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1507130Z outputs = self.model.decoder( 2025-08-14T21:59:31.1507622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1508122Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1508558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1509012Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1509506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1510038Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1510574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1511133Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1511698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:59:31.1512333Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:59:31.1512567Z 2025-08-14T21:59:31.1512704Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:31.1513142Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1513551Z return mod(**inputs) 2025-08-14T21:59:31.1514017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1514518Z outputs = self.model.decoder( 2025-08-14T21:59:31.1515003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1515509Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1515960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1516404Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1516904Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1517426Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1517947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1518465Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1519022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:59:31.1519599Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:31.1519805Z 2025-08-14T21:59:31.1519915Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1520168Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1520475Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1520965Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1525687Z return mod(**inputs) 2025-08-14T21:59:31.1526154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1526656Z outputs = self.model.decoder( 2025-08-14T21:59:31.1527149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1527664Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1528143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1528590Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1529091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:59:31.1529640Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:59:31.1530131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:59:31.1530547Z return self.act(input) 2025-08-14T21:59:31.1530694Z 2025-08-14T21:59:31.1530792Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1531047Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1531291Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1531533Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1531777Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1532017Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1532257Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1532520Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1532801Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1533240Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1533668Z return mod(**inputs) 2025-08-14T21:59:31.1534131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1534622Z outputs = self.model.decoder( 2025-08-14T21:59:31.1535117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1535738Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1536171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1536615Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1537118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1537643Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1538165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1538679Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1539231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:59:31.1539835Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:59:31.1540064Z 2025-08-14T21:59:31.1540194Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:31.1540636Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1541044Z return mod(**inputs) 2025-08-14T21:59:31.1541507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1542031Z outputs = self.model.decoder( 2025-08-14T21:59:31.1542524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1543032Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1543466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1543911Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1544407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1544937Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1545479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1546003Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1546561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:59:31.1547137Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:31.1547342Z 2025-08-14T21:59:31.1547440Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1547694Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1547982Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1548416Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1549181Z return mod(**inputs) 2025-08-14T21:59:31.1549679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1554382Z outputs = self.model.decoder( 2025-08-14T21:59:31.1554865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1555369Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1555803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1556288Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1556783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:59:31.1557333Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:59:31.1557824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:59:31.1558287Z return self.act(input) 2025-08-14T21:59:31.1558430Z 2025-08-14T21:59:31.1558530Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1558783Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1559035Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1559278Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1559525Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1559772Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1560012Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1560256Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1560535Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1560972Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1561450Z return mod(**inputs) 2025-08-14T21:59:31.1561918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1562413Z outputs = self.model.decoder( 2025-08-14T21:59:31.1562904Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1563400Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1563884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1564379Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1564959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1565491Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1566015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1566588Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1567185Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:59:31.1567802Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:59:31.1568036Z 2025-08-14T21:59:31.1568174Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:31.1568620Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1569032Z return mod(**inputs) 2025-08-14T21:59:31.1569502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1569992Z outputs = self.model.decoder( 2025-08-14T21:59:31.1570487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1570984Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1571418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1571868Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1572393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1572919Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1573467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1573990Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1574546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:59:31.1575122Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:31.1575323Z 2025-08-14T21:59:31.1575495Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1575749Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1576027Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1576472Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1576874Z return mod(**inputs) 2025-08-14T21:59:31.1577333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1577837Z outputs = self.model.decoder( 2025-08-14T21:59:31.1578328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1578878Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1587792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1588387Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1588936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:59:31.1589490Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:59:31.1589971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:59:31.1590448Z return self.act(input) 2025-08-14T21:59:31.1590589Z 2025-08-14T21:59:31.1590696Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1590942Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1591191Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1591437Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1591683Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1591922Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1592166Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1592415Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1592689Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1593215Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1595758Z return mod(**inputs) 2025-08-14T21:59:31.1596219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1596723Z outputs = self.model.decoder( 2025-08-14T21:59:31.1597217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1597769Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1598196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1598652Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1599158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1599680Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1600213Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1600771Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1601420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:59:31.1602043Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:59:31.1602283Z 2025-08-14T21:59:31.1602413Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:31.1602859Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1603264Z return mod(**inputs) 2025-08-14T21:59:31.1603728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1604230Z outputs = self.model.decoder( 2025-08-14T21:59:31.1604731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1605227Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1605667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1606127Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1606630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1607150Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1607731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1608330Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1608889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:59:31.1609462Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:31.1609673Z 2025-08-14T21:59:31.1609775Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1610086Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1610368Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1610817Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1611215Z return mod(**inputs) 2025-08-14T21:59:31.1611680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1612183Z outputs = self.model.decoder( 2025-08-14T21:59:31.1612669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1613165Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1613631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1614077Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1614579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:59:31.1615128Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:59:31.1615611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:59:31.1616038Z return self.act(input) 2025-08-14T21:59:31.1616182Z 2025-08-14T21:59:31.1616287Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1616540Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1616790Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1617041Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1617288Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1617565Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1617816Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1618065Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1618346Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1618801Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1619238Z return mod(**inputs) 2025-08-14T21:59:31.1619697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1620196Z outputs = self.model.decoder( 2025-08-14T21:59:31.1620689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1621187Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1621616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1622072Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1626879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1627420Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1627945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1628472Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1629045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:59:31.1629647Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:59:31.1629895Z 2025-08-14T21:59:31.1630029Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:31.1630491Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1630896Z return mod(**inputs) 2025-08-14T21:59:31.1631393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1631902Z outputs = self.model.decoder( 2025-08-14T21:59:31.1632391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1632883Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1633312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1633761Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1634258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1634800Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1635326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1635849Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1636402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:59:31.1637094Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:31.1637305Z 2025-08-14T21:59:31.1637405Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1637656Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1637940Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1638378Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1638790Z return mod(**inputs) 2025-08-14T21:59:31.1639260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1639778Z outputs = self.model.decoder( 2025-08-14T21:59:31.1640269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1640786Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1641293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1641746Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1642252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:59:31.1642805Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:59:31.1643284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:59:31.1643719Z return self.act(input) 2025-08-14T21:59:31.1643867Z 2025-08-14T21:59:31.1643968Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1644227Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1644473Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1644724Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1644977Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1645227Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1645478Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1645729Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1646004Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1646455Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1646859Z return mod(**inputs) 2025-08-14T21:59:31.1647327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1647822Z outputs = self.model.decoder( 2025-08-14T21:59:31.1648316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1649177Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1649673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1650129Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1650629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1651180Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1655894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1656420Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1657011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:59:31.1657616Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:59:31.1657855Z 2025-08-14T21:59:31.1657984Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:31.1658428Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1658828Z return mod(**inputs) 2025-08-14T21:59:31.1659287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1659789Z outputs = self.model.decoder( 2025-08-14T21:59:31.1660272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1660767Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1661200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1661682Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1662187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1662745Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1663259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1663781Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1664334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:59:31.1664906Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:31.1665116Z 2025-08-14T21:59:31.1665221Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1665482Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1665826Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1666335Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1666741Z return mod(**inputs) 2025-08-14T21:59:31.1667208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1667703Z outputs = self.model.decoder( 2025-08-14T21:59:31.1668201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1668698Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1669140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1669584Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1670089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:59:31.1670653Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:59:31.1671184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:59:31.1671635Z return self.act(input) 2025-08-14T21:59:31.1671781Z 2025-08-14T21:59:31.1671877Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1672129Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1672377Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1672621Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1672863Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1673108Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1673351Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1673599Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1673903Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:31.1674343Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1674746Z return mod(**inputs) 2025-08-14T21:59:31.1675211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1675707Z outputs = self.model.decoder( 2025-08-14T21:59:31.1676200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1676698Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1677187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1677638Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1678141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1678675Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1679218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1679746Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1680378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:59:31.1685283Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:59:31.1685520Z 2025-08-14T21:59:31.1685650Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:31.1686096Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:31.1686505Z return mod(**inputs) 2025-08-14T21:59:31.1686978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:59:31.1687476Z outputs = self.model.decoder( 2025-08-14T21:59:31.1687965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:59:31.1688467Z layer_outputs = decoder_layer( 2025-08-14T21:59:31.1688909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:31.1689378Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:31.1689883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:59:31.1690410Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:59:31.1690923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:59:31.1691450Z attn_output, attn_weights = attention_interface( 2025-08-14T21:59:31.1692009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:59:31.1692587Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:31.1692794Z 2025-08-14T21:59:31.1692926Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1693185Z cudagraph partition due to non gpu ops 2025-08-14T21:59:31.1693467Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:59:31.1693902Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:59:31.1694304Z return mod(**inputs)
2025-08-14T21:59:31.1694823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward
2025-08-14T21:59:31.1695399Z outputs = self.model.decoder(
2025-08-14T21:59:31.1695908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward
2025-08-14T21:59:31.1696412Z layer_outputs = decoder_layer(
2025-08-14T21:59:31.1696850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:59:31.1697294Z return super().__call__(*args, **kwargs)
2025-08-14T21:59:31.1697791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward
2025-08-14T21:59:31.1698339Z hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:59:31.1698824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:59:31.1699241Z return self.act(input)
2025-08-14T21:59:31.1699385Z
2025-08-14T21:59:31.1699484Z cudagraph partition due to non gpu ops
2025-08-14T21:59:31.1699775Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:59:31.1700224Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:59:31.1700647Z return mod(**inputs)
2025-08-14T21:59:31.1701113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1650, in forward
2025-08-14T21:59:31.1701614Z logits = self.lm_head(outputs[0])
2025-08-14T21:59:31.1701801Z
2025-08-14T21:59:31.1701930Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:59:31.1702374Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:59:31.1702779Z return mod(**inputs)
2025-08-14T21:59:31.1703242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1656, in forward
2025-08-14T21:59:31.1703820Z loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:59:31.1704080Z
2025-08-14T21:59:38.0565409Z Compilation time (from dynamo_timed): 19.671580732
2025-08-14T21:59:38.0618435Z pass
2025-08-14T21:59:38.0618891Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:59:38.0620335Z TIMING: _recursive_pre_grad_passes:0.05265 _recursive_joint_graph_passes:0.80939 _recursive_post_grad_passes:0.10883 async_compile.wait:0.82296 code_gen:6.55422 inductor_compile:10.30452 backend_compile:16.58892 gc:0.00024 entire_frame_compile:19.67158 total_wall_time:19.67158
2025-08-14T21:59:38.0621489Z STATS: call_* op count: 369 | FakeTensorMode.__torch_dispatch__:24794 | FakeTensor.__torch_dispatch__:3939 | ProxyTorchDispatchMode.__torch_dispatch__:5623
2025-08-14T21:59:38.0622112Z Dynamo produced 1 graphs covering 369 ops with 0 graph breaks (0 unique)
2025-08-14T21:59:44.6092584Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:59:44.6093687Z from pkg_resources import resource_filename
2025-08-14T21:59:45.3239465Z
2025-08-14T21:59:54.9362440Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:59:54.9362797Z loading model: 0it [00:09, ?it/s]
2025-08-14T21:59:54.9403061Z cpu eval PegasusForConditionalGeneration
2025-08-14T21:59:56.5252632Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:59:57.4036546Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:59:58.2779496Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:00:29.0830410Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0830774Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0831077Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0831365Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0831887Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0832158Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0832404Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0832645Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0832901Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0833261Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0833651Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0834033Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0834385Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0834704Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0835073Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0835322Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0835599Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0835844Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0836119Z cudagraph partition due to non gpu ops
2025-08-14T22:00:29.0836448Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T22:00:29.0837124Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.0837629Z return mod(**inputs) 2025-08-14T22:00:29.0838198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.0838861Z outputs = self.model( 2025-08-14T22:00:29.0839448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.0840090Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.0840594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.0841095Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.0841630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.0842091Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.0842603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.0843130Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.0843641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.0844217Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.0844873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.0845614Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.0845897Z 2025-08-14T22:00:29.0846042Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.0846552Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.0846991Z return mod(**inputs) 2025-08-14T22:00:29.0847521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.0848172Z outputs = self.model( 2025-08-14T22:00:29.0849151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.0849826Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.0850444Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.0851130Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.0851655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.0852106Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.0852729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.0853302Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.0853924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.0860724Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.0861286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.0861865Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.0862074Z 2025-08-14T22:00:29.0862184Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0862446Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0862853Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.0863406Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.0863934Z return mod(**inputs) 2025-08-14T22:00:29.0864505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.0865075Z outputs = self.model( 2025-08-14T22:00:29.0865686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.0866270Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.0866818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.0867420Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.0867950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.0868508Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.0869104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T22:00:29.0869662Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.0870147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.0870575Z return self.act(input) 2025-08-14T22:00:29.0870715Z 2025-08-14T22:00:29.0870825Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0871077Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0871333Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0871586Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0871830Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0872066Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0872308Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0872559Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0872834Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.0873291Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.0873702Z return mod(**inputs) 2025-08-14T22:00:29.0874215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.0874965Z outputs = self.model( 2025-08-14T22:00:29.0875635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.0876243Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.0876871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.0877375Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.0877971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.0878530Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.0879140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.0879802Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.0880389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.0881014Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.0881770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.0882466Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.0882744Z 2025-08-14T22:00:29.0882922Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.0887631Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.0889324Z return mod(**inputs) 2025-08-14T22:00:29.0889858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.0890443Z outputs = self.model( 2025-08-14T22:00:29.0891068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.0891660Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.0892277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.0892829Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.0893361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.0893875Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.0894489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.0895093Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.0895682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.0896297Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.0896919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.0897702Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.0897921Z 2025-08-14T22:00:29.0911630Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0916433Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0916748Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.0917434Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.0917951Z return mod(**inputs) 2025-08-14T22:00:29.0918529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.0919189Z outputs = self.model( 2025-08-14T22:00:29.0919782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.0920405Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.0920986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.0921669Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.0922193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.0922658Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.0923351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T22:00:29.0923977Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.0924514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.0925056Z return self.act(input) 2025-08-14T22:00:29.0925206Z 2025-08-14T22:00:29.0925364Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0925689Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0925977Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0926282Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0926537Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0926871Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0927120Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0927358Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0927649Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.0928157Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.0928562Z return mod(**inputs) 2025-08-14T22:00:29.0929040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.0929567Z outputs = self.model( 2025-08-14T22:00:29.0930039Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.0930607Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.0931110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.0931607Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.0932039Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.0932498Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.0933006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.0933535Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.0934040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.0934568Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.0935132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.0935966Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.0936233Z 2025-08-14T22:00:29.0936377Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.0936941Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.0937443Z return mod(**inputs) 2025-08-14T22:00:29.0937980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.0938672Z outputs = self.model( 2025-08-14T22:00:29.0939266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.0939918Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.0940450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.0941050Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.0945714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.0946173Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.0946704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.0947239Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.0947771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.0948294Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.0949237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.0949825Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.0950059Z 2025-08-14T22:00:29.0950199Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0950502Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0950930Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.0951460Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.0952046Z return mod(**inputs) 2025-08-14T22:00:29.0952587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.0953085Z outputs = self.model( 2025-08-14T22:00:29.0953670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.0954361Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.0954967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.0955531Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.0956057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.0956502Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.0957002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T22:00:29.0957560Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.0958044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.0958471Z return self.act(input) 2025-08-14T22:00:29.0958615Z 2025-08-14T22:00:29.0958718Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0958973Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0959217Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0959538Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0959782Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0960028Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0960349Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0960604Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0960882Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.0961418Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.0961832Z return mod(**inputs) 2025-08-14T22:00:29.0962355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.0962847Z outputs = self.model( 2025-08-14T22:00:29.0963317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.0963867Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.0964404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.0964949Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.0965487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.0966032Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.0966616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.0967186Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.0967800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.0968421Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.0969020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.0969746Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.0970003Z 2025-08-14T22:00:29.0978394Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.0978996Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.0979566Z return mod(**inputs) 2025-08-14T22:00:29.0980179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.0980842Z outputs = self.model( 2025-08-14T22:00:29.0981458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.0982157Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.0982646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.0983146Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.0983575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.0984023Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.0986770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.0987298Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.0987802Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.0988340Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.0988921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.0989484Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.0989694Z 2025-08-14T22:00:29.0989792Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0990050Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0990338Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.0990774Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.0991175Z return mod(**inputs) 2025-08-14T22:00:29.0991648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.0992135Z outputs = self.model( 2025-08-14T22:00:29.0992675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.0993172Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.0993656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.0994137Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.0994567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.0995010Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.0995531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T22:00:29.0996077Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.0996562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.0996991Z return self.act(input) 2025-08-14T22:00:29.0997130Z 2025-08-14T22:00:29.0997226Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0997481Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0997732Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0997976Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0998214Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0998463Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0998738Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0998991Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.0999349Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.0999801Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1000240Z return mod(**inputs) 2025-08-14T22:00:29.1000710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1001357Z outputs = self.model( 2025-08-14T22:00:29.1001827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1002319Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1002814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1003316Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1003740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1004192Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1004695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1005215Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1005723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1006245Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1006804Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1007402Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1007643Z 2025-08-14T22:00:29.1007775Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1008221Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1008625Z return mod(**inputs) 2025-08-14T22:00:29.1009090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1009588Z outputs = self.model( 2025-08-14T22:00:29.1010084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1010592Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1011073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1011565Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1011998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1012440Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1012961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1013538Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1018316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1018832Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1019401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1019982Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1020190Z 2025-08-14T22:00:29.1020292Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1020561Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1020865Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1021317Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1021730Z return mod(**inputs) 2025-08-14T22:00:29.1022223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1022712Z outputs = self.model( 2025-08-14T22:00:29.1023168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1023682Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1024166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1024655Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1025089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1025539Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1026042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T22:00:29.1026587Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1027076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1027498Z return self.act(input) 2025-08-14T22:00:29.1027635Z 2025-08-14T22:00:29.1027787Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1028031Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1028352Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1028607Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1028844Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1029086Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1029331Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1029568Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1029853Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1030303Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1030703Z return mod(**inputs) 2025-08-14T22:00:29.1031201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1031693Z outputs = self.model( 2025-08-14T22:00:29.1032210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1032699Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1033191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1033688Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1034115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1034586Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1035094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1035611Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1036117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1036641Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1037201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1037796Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1038029Z 2025-08-14T22:00:29.1038158Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1038605Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1039011Z return mod(**inputs) 2025-08-14T22:00:29.1039503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1039986Z outputs = self.model( 2025-08-14T22:00:29.1040448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1040966Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1041537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1042038Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1042519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1047190Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1047684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1048201Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1049036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1049554Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1050106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1050678Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1050882Z 2025-08-14T22:00:29.1050989Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1051233Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1051523Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1051966Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1052366Z return mod(**inputs) 2025-08-14T22:00:29.1052824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1053315Z outputs = self.model( 2025-08-14T22:00:29.1053841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1054339Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1054821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1055309Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1055737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1056177Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1056733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T22:00:29.1057396Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1057878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1058308Z return self.act(input) 2025-08-14T22:00:29.1058456Z 2025-08-14T22:00:29.1058555Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1058810Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1059052Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1059301Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1059548Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1059788Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1060029Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1060282Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1060557Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1061062Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1061511Z return mod(**inputs) 2025-08-14T22:00:29.1061976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1062464Z outputs = self.model( 2025-08-14T22:00:29.1062967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1063464Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1063950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1064450Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1064890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1065347Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1065843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1066359Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1066872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1067392Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1067937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1068535Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1068766Z 2025-08-14T22:00:29.1068906Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1069341Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1069745Z return mod(**inputs) 2025-08-14T22:00:29.1070209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1070704Z outputs = self.model( 2025-08-14T22:00:29.1071201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1075908Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1076400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1076897Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1077325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1077776Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1078313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1078832Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1079339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1079861Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1080414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1080978Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1081186Z 2025-08-14T22:00:29.1081371Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1081632Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1081914Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1082356Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1082765Z return mod(**inputs) 2025-08-14T22:00:29.1083231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1083740Z outputs = self.model( 2025-08-14T22:00:29.1084205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1084724Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1085206Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1085720Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1086251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1086708Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1087203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T22:00:29.1087761Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1088246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1088673Z return self.act(input) 2025-08-14T22:00:29.1088811Z 2025-08-14T22:00:29.1088911Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1089165Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1089422Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1089661Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1089956Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1090208Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1090447Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1090692Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1090980Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1091429Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1091827Z return mod(**inputs) 2025-08-14T22:00:29.1092292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1092811Z outputs = self.model( 2025-08-14T22:00:29.1093271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1093762Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1094242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1094733Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1095157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1095603Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1096129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1096641Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1097158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1097678Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1098233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1098819Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1099060Z 2025-08-14T22:00:29.1099188Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1099626Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1100027Z return mod(**inputs) 2025-08-14T22:00:29.1100543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1105319Z outputs = self.model( 2025-08-14T22:00:29.1105789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1106301Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1106795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1107288Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1107728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1108174Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1108681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1109209Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1109726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1110240Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1110800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1111367Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1111569Z 2025-08-14T22:00:29.1111672Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1111927Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1112213Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1112660Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1113052Z return mod(**inputs) 2025-08-14T22:00:29.1113517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1114008Z outputs = self.model( 2025-08-14T22:00:29.1114491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1115051Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1115614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1116108Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1116536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1116988Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1117487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T22:00:29.1118061Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1118544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1119016Z return self.act(input) 2025-08-14T22:00:29.1119156Z 2025-08-14T22:00:29.1119265Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1119515Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1119762Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1120006Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1120240Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1120482Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1120723Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1120965Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1121237Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1121776Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1122211Z return mod(**inputs) 2025-08-14T22:00:29.1122672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1123170Z outputs = self.model( 2025-08-14T22:00:29.1123657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1124148Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1124627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1125121Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1125562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1126009Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1126512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1127033Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1127547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1128063Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1128616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1129254Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1129513Z 2025-08-14T22:00:29.1138033Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1138604Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1139126Z return mod(**inputs) 2025-08-14T22:00:29.1139750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1140273Z outputs = self.model( 2025-08-14T22:00:29.1140777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1141276Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1141764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1142255Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1142687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1143144Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1143639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1146379Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1146894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1147420Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1148008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1148583Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1149125Z 2025-08-14T22:00:29.1149229Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1149496Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1149777Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1150239Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1150654Z return mod(**inputs) 2025-08-14T22:00:29.1151117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1151667Z outputs = self.model( 2025-08-14T22:00:29.1152251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1152837Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1153313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1153805Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1154237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1154677Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1155178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T22:00:29.1155730Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1156217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1156632Z return self.act(input) 2025-08-14T22:00:29.1156778Z 2025-08-14T22:00:29.1156876Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1157129Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1157367Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1157614Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1157854Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1158101Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1158396Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1158717Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1159002Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1159446Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1159857Z return mod(**inputs) 2025-08-14T22:00:29.1160379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1160956Z outputs = self.model( 2025-08-14T22:00:29.1161504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1162003Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1162492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1162976Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1163406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1163862Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1164404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1164917Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1165431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1165957Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1166508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1167107Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1167349Z 2025-08-14T22:00:29.1167478Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1167925Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1168320Z return mod(**inputs) 2025-08-14T22:00:29.1168787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1169304Z outputs = self.model( 2025-08-14T22:00:29.1169777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1170303Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1170788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1171286Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1171708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1172166Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1172679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1177426Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1177931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1178458Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1179013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1179587Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1179794Z 2025-08-14T22:00:29.1179893Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1180153Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1180443Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1180879Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1181282Z return mod(**inputs) 2025-08-14T22:00:29.1181743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1182238Z outputs = self.model( 2025-08-14T22:00:29.1182724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1183220Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1183701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1184192Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1184619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1185070Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1185569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T22:00:29.1186132Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1186620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1187046Z return self.act(input) 2025-08-14T22:00:29.1187206Z 2025-08-14T22:00:29.1187327Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1187656Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1187910Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1188157Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1188397Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1188645Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1188891Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1189133Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1189411Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1189865Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1190288Z return mod(**inputs) 2025-08-14T22:00:29.1190755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1191246Z outputs = self.model( 2025-08-14T22:00:29.1191759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1192280Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1192774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1193269Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1193691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1194143Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1194642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1195157Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1195661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1196187Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1196735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1197332Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1197564Z 2025-08-14T22:00:29.1197691Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1198140Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1198544Z return mod(**inputs) 2025-08-14T22:00:29.1199000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1199491Z outputs = self.model( 2025-08-14T22:00:29.1199986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1200476Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1200950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1201534Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1202017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1206700Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1207194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1207737Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1208255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1208768Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1209322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1209898Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1210100Z 2025-08-14T22:00:29.1210207Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1210457Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1210743Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1211188Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1211585Z return mod(**inputs) 2025-08-14T22:00:29.1212048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1212576Z outputs = self.model( 2025-08-14T22:00:29.1213038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1213550Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1214042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1214537Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1214969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1215418Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1215925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T22:00:29.1216529Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1217085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1217518Z return self.act(input) 2025-08-14T22:00:29.1217664Z 2025-08-14T22:00:29.1217764Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1218020Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1218260Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1218511Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1218759Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1218994Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1219238Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1219483Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1219758Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1220201Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1220667Z return mod(**inputs) 2025-08-14T22:00:29.1221134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1221615Z outputs = self.model( 2025-08-14T22:00:29.1222112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1222615Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1223089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1223576Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1224001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1224448Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1224979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1225493Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1226007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1226536Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1227086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1227682Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1227913Z 2025-08-14T22:00:29.1228050Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1228492Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1228899Z return mod(**inputs) 2025-08-14T22:00:29.1229368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1229884Z outputs = self.model( 2025-08-14T22:00:29.1230344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1230899Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1235659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1236147Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1236581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1237038Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1237547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T22:00:29.1238062Z hidden_states, attn_weights = self.self_attn( 2025-08-14T22:00:29.1238591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1239119Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1239682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1240246Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1240460Z 2025-08-14T22:00:29.1240558Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1240814Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1241094Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1241619Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1242020Z return mod(**inputs) 2025-08-14T22:00:29.1242485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1242975Z outputs = self.model( 2025-08-14T22:00:29.1243471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T22:00:29.1243972Z encoder_outputs = self.encoder( 2025-08-14T22:00:29.1244462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T22:00:29.1244965Z layer_outputs = encoder_layer( 2025-08-14T22:00:29.1245457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1245982Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1246475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T22:00:29.1247045Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1247530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1247948Z return self.act(input) 2025-08-14T22:00:29.1248090Z 2025-08-14T22:00:29.1248190Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1248446Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1249004Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1249246Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1249627Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1249896Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1250134Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1250373Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1250655Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1251099Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1251494Z return mod(**inputs) 2025-08-14T22:00:29.1252015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1252500Z outputs = self.model( 2025-08-14T22:00:29.1252963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1253489Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1253986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1254483Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1254911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1255364Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1255863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1256381Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1256903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1257421Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1257972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1258567Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1258811Z 2025-08-14T22:00:29.1258941Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1259381Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1259832Z return mod(**inputs) 2025-08-14T22:00:29.1268707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1269365Z outputs = self.model( 2025-08-14T22:00:29.1269980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1270684Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1271282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1271778Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1272208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1272653Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1273152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1273678Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1274291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1274880Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1275437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1276009Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1276213Z 2025-08-14T22:00:29.1276310Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1276562Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1276812Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1277057Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1277292Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1277537Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1277779Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1278017Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1278355Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1278796Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1279194Z return mod(**inputs) 2025-08-14T22:00:29.1279678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1280165Z outputs = self.model( 2025-08-14T22:00:29.1280626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1281110Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1281717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1282216Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1282692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1283144Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1283642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1284176Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1284702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1285219Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1285772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1286374Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1286608Z 2025-08-14T22:00:29.1286741Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1287189Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1287595Z return mod(**inputs) 2025-08-14T22:00:29.1288082Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1288577Z outputs = self.model( 2025-08-14T22:00:29.1289161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1289661Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1290141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1290634Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1291066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1291539Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1292035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1292576Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1293115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1293636Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1294188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1294756Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1294959Z 2025-08-14T22:00:29.1295062Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1311269Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1311657Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1312354Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1312896Z return mod(**inputs) 2025-08-14T22:00:29.1313563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1314281Z outputs = self.model( 2025-08-14T22:00:29.1314749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1315249Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1315750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1316251Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1316696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1317162Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1317686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T22:00:29.1318392Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1318890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1319329Z return self.act(input) 2025-08-14T22:00:29.1319471Z 2025-08-14T22:00:29.1319582Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1319836Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1320100Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1320357Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1320612Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1320872Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1321118Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1321417Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1321706Z cudagraph partition due to non gpu ops. 
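The traces in this stretch of the log have moved to the decoder side of the model: decoder self-attention (the self_attn call traced at modeling_pegasus.py line 407), cross-attention over the encoder output (encoder_attn, line 424), and the decoder feed-forward (line 438). A schematic of that forward path is sketched below as a reading aid for the traces; residual connections, layer norms, masking and KV caching are deliberately omitted, and the module shapes are assumptions, so this is not the real PegasusDecoderLayer.

```python
# Schematic of the three decoder-layer call sites named in the traces:
# self_attn, then encoder_attn, then the fc1/activation feed-forward.
import torch
import torch.nn as nn

class DecoderLayerSketch(nn.Module):
    def __init__(self, d_model=1024, num_heads=16, ffn_dim=4096):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.encoder_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.fc1 = nn.Linear(d_model, ffn_dim)
        self.fc2 = nn.Linear(ffn_dim, d_model)
        self.activation_fn = nn.GELU()

    def forward(self, hidden_states, encoder_hidden_states):
        # Decoder self-attention (first traced call site).
        hidden_states, self_attn_weights = self.self_attn(
            hidden_states, hidden_states, hidden_states
        )
        # Cross-attention over the encoder output (second traced call site).
        hidden_states, cross_attn_weights = self.encoder_attn(
            hidden_states, encoder_hidden_states, encoder_hidden_states
        )
        # Feed-forward block (third traced call site).
        hidden_states = self.fc2(self.activation_fn(self.fc1(hidden_states)))
        return hidden_states

out = DecoderLayerSketch()(torch.randn(1, 16, 1024), torch.randn(1, 32, 1024))
```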
Found from : 2025-08-14T22:00:29.1322160Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1322592Z return mod(**inputs) 2025-08-14T22:00:29.1323071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1323570Z outputs = self.model( 2025-08-14T22:00:29.1324065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1324598Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1325095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1325592Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1326061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1326512Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1327017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1327549Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1328072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1328600Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1329159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1329766Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1330002Z 2025-08-14T22:00:29.1330138Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1330631Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1331037Z return mod(**inputs) 2025-08-14T22:00:29.1331505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1332022Z outputs = self.model( 2025-08-14T22:00:29.1332544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1337242Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1337723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1338221Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1338660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1339125Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1339626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1340157Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1340682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1341208Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1341762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1342344Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1342553Z 2025-08-14T22:00:29.1342665Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1342918Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1343177Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1343432Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1343677Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1343930Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1344176Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1344455Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1344739Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1345193Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1345599Z return mod(**inputs) 2025-08-14T22:00:29.1346062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1346567Z outputs = self.model( 2025-08-14T22:00:29.1347147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1347680Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1348162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1348656Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1349439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1349894Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1350403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1351060Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1351637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1352157Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1352725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1353406Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1353641Z 2025-08-14T22:00:29.1353785Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1354231Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1354681Z return mod(**inputs) 2025-08-14T22:00:29.1355152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1355638Z outputs = self.model( 2025-08-14T22:00:29.1356105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1356606Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1357093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1357583Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1358022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1358475Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1358985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1359515Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1360054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1360577Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1361125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1365984Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1366197Z 2025-08-14T22:00:29.1366299Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1366556Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1366881Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1367337Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1367746Z return mod(**inputs) 2025-08-14T22:00:29.1368208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1368704Z outputs = self.model( 2025-08-14T22:00:29.1369165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1369666Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1370181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1370676Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1371114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1371556Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1372065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T22:00:29.1372623Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1373120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1373542Z return self.act(input) 2025-08-14T22:00:29.1373688Z 2025-08-14T22:00:29.1374009Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1374267Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1374521Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1374801Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1375059Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1375321Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1375564Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1375867Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1376259Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1376711Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1377109Z return mod(**inputs) 2025-08-14T22:00:29.1377575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1378076Z outputs = self.model( 2025-08-14T22:00:29.1378541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1379039Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1379531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1380076Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1380508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1380967Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1381467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1381984Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1382509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1383029Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1383588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1384191Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1384433Z 2025-08-14T22:00:29.1384591Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1385040Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1385454Z return mod(**inputs) 2025-08-14T22:00:29.1385914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1386404Z outputs = self.model( 2025-08-14T22:00:29.1386871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1387364Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1387878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1388381Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1388815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1389269Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1389776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1390353Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1395116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1395633Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1396193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1396772Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1397006Z 2025-08-14T22:00:29.1397105Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1397371Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1397630Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1397881Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1398150Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1398399Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1398646Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1398886Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1399169Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1399626Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1400035Z return mod(**inputs) 2025-08-14T22:00:29.1400506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1401003Z outputs = self.model( 2025-08-14T22:00:29.1401554Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1402042Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1402537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1403034Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1403464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1403927Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1404431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1405012Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1405611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1406131Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1406713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1407318Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1407554Z 2025-08-14T22:00:29.1407685Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1408145Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1408540Z return mod(**inputs) 2025-08-14T22:00:29.1409051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1409542Z outputs = self.model( 2025-08-14T22:00:29.1410041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1410534Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1411021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1411519Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1411952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1412395Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1412900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1413436Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1413967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1414509Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1415066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1415639Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1415866Z 2025-08-14T22:00:29.1415967Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1416216Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1416494Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1416934Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1417334Z return mod(**inputs) 2025-08-14T22:00:29.1417797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1418290Z outputs = self.model( 2025-08-14T22:00:29.1418750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1419295Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1428300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1428967Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1429522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1430120Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1430788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T22:00:29.1431338Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1431826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1432259Z return self.act(input) 2025-08-14T22:00:29.1432402Z 2025-08-14T22:00:29.1432506Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1432750Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1433026Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1433123Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1433228Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1433320Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1433412Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1433512Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1433647Z cudagraph partition due to non gpu ops. 
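Editor's note: unlike the two records before it, this one bottoms out in the decoder layer's feed-forward path (modeling_pegasus.py line 438 calling into transformers' activations) rather than in attention. A generic stand-in for that fc1 -> activation -> fc2 shape, again for orientation only and not the actual PegasusDecoderLayer:

# Minimal sketch of the feed-forward path named in the record above
# (modeling_pegasus.py:438 -> activations.py:69).
import torch


class TinyFfn(torch.nn.Module):
    def __init__(self, d_model: int = 64, d_ff: int = 256):
        super().__init__()
        self.fc1 = torch.nn.Linear(d_model, d_ff)
        self.fc2 = torch.nn.Linear(d_ff, d_model)
        # Pegasus configs typically use a GELU-family activation; GELU is assumed here.
        self.activation_fn = torch.nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        hidden_states = self.activation_fn(self.fc1(hidden_states))  # the traced frame
        return self.fc2(hidden_states)


print(TinyFfn()(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])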
Found from : 2025-08-14T22:00:29.1433955Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1434039Z return mod(**inputs) 2025-08-14T22:00:29.1436536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1436631Z outputs = self.model( 2025-08-14T22:00:29.1436968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1437062Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1437405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1437498Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1437786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1437925Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1438258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1438393Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1438727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1438876Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1439259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1439442Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1439455Z 2025-08-14T22:00:29.1439597Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1439851Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1439934Z return mod(**inputs) 2025-08-14T22:00:29.1440276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1440361Z outputs = self.model( 2025-08-14T22:00:29.1440701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1440797Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1441132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1441235Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1441607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1441705Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1442095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1442220Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1442568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1442691Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1443060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1443233Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1443248Z 2025-08-14T22:00:29.1443345Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1443447Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1443540Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1443631Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1443729Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1443819Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1443910Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1444008Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1444137Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1444411Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1444500Z return mod(**inputs) 2025-08-14T22:00:29.1444836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1444926Z outputs = self.model( 2025-08-14T22:00:29.1445255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1445346Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1445683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1445773Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1446058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1446158Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1446511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1446651Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1446985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1447124Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1447498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1447659Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1447671Z 2025-08-14T22:00:29.1447808Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1448058Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1448144Z return mod(**inputs) 2025-08-14T22:00:29.1448553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1449020Z outputs = self.model( 2025-08-14T22:00:29.1449369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1449462Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1449796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1449894Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1450175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1450273Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1450612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1450745Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1451141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1451265Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1451633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1451772Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1451785Z 2025-08-14T22:00:29.1451884Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1451989Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1452117Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1452365Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1452489Z return mod(**inputs) 2025-08-14T22:00:29.1452833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1452919Z outputs = self.model( 2025-08-14T22:00:29.1453263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1453354Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1453699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1453789Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1454068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1454178Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1454512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T22:00:29.1454690Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1454970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1455057Z return self.act(input) 2025-08-14T22:00:29.1455102Z 2025-08-14T22:00:29.1455205Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1455297Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1455394Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1455496Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1455587Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1455679Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1455781Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1455875Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1456013Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1456265Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1456350Z return mod(**inputs) 2025-08-14T22:00:29.1456691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1456778Z outputs = self.model( 2025-08-14T22:00:29.1457111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1457211Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1457544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1457644Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1457924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1458025Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1458365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1458489Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1458857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1458992Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1459362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1459535Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1459547Z 2025-08-14T22:00:29.1459673Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1459924Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1460034Z return mod(**inputs) 2025-08-14T22:00:29.1460371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1460462Z outputs = self.model( 2025-08-14T22:00:29.1460795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1460887Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1461226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1461315Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1461597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1461709Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1462042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1462191Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1462522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1462640Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1467312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1467452Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1467465Z 2025-08-14T22:00:29.1467570Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1467663Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1467756Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1467860Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1467953Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1468047Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1468150Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1468241Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1468367Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1468625Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1468710Z return mod(**inputs) 2025-08-14T22:00:29.1469053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1469137Z outputs = self.model( 2025-08-14T22:00:29.1469471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1469569Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1469899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1469993Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1470277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1470402Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1470743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1470875Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1471203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1471326Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1471692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1471882Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1471898Z 2025-08-14T22:00:29.1472025Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1472274Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1472365Z return mod(**inputs) 2025-08-14T22:00:29.1472700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1472790Z outputs = self.model( 2025-08-14T22:00:29.1473123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1473214Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1473557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1473649Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1473931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1474059Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1474393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1474554Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1474889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1475006Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1475382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1475516Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1475528Z 2025-08-14T22:00:29.1475633Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1475733Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1475860Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1476122Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1476203Z return mod(**inputs) 2025-08-14T22:00:29.1476538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1476629Z outputs = self.model( 2025-08-14T22:00:29.1476963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1477065Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1477453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1477548Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1477912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1478016Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1478368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T22:00:29.1478529Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1478803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1478897Z return self.act(input) 2025-08-14T22:00:29.1478910Z 2025-08-14T22:00:29.1479006Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1479099Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1479198Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1479289Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1479381Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1479504Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1479598Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1479696Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1479824Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1480076Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1480171Z return mod(**inputs) 2025-08-14T22:00:29.1480507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1480592Z outputs = self.model( 2025-08-14T22:00:29.1480940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1481032Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1481435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1481556Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1481836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1481947Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1482310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1482438Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1482781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1482900Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1483280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1483499Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1483514Z 2025-08-14T22:00:29.1483643Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1483900Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1483984Z return mod(**inputs) 2025-08-14T22:00:29.1484325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1484408Z outputs = self.model( 2025-08-14T22:00:29.1484739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1484837Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1485165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1485254Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1485544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1485643Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1486004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1486128Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1486456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1486580Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1486945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1487088Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1487101Z 2025-08-14T22:00:29.1487194Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1487309Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1487412Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1487503Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1487596Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1487697Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1487789Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1487880Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1488012Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1488261Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1488347Z return mod(**inputs) 2025-08-14T22:00:29.1488679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1488762Z outputs = self.model( 2025-08-14T22:00:29.1489099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1489211Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1489546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1489677Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1489958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1490063Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1490392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1490524Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1490861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1490982Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1491357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1491523Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1491537Z 2025-08-14T22:00:29.1491676Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1491979Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1496296Z return mod(**inputs) 2025-08-14T22:00:29.1496638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1496729Z outputs = self.model( 2025-08-14T22:00:29.1497062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1497164Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1497498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1497590Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1497901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1498004Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1498354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1498487Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1498818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1498946Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1499359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1499499Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1499526Z 2025-08-14T22:00:29.1499630Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1499728Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1499865Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1500114Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1500196Z return mod(**inputs) 2025-08-14T22:00:29.1500539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1500622Z outputs = self.model( 2025-08-14T22:00:29.1500960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1501051Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1501408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1501503Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1501784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1501908Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1502249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T22:00:29.1502397Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1502673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1502759Z return self.act(input) 2025-08-14T22:00:29.1502772Z 2025-08-14T22:00:29.1502879Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1502988Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1503082Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1503176Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1503275Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1503368Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1503471Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1503566Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1503694Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1503947Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1504030Z return mod(**inputs) 2025-08-14T22:00:29.1504365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1504457Z outputs = self.model( 2025-08-14T22:00:29.1504791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1504889Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1505239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1505330Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1505615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1505713Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1506041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1506190Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1506553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1506778Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1507147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1507310Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1507325Z 2025-08-14T22:00:29.1507464Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1507711Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1507799Z return mod(**inputs) 2025-08-14T22:00:29.1508135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1508222Z outputs = self.model( 2025-08-14T22:00:29.1508558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1508649Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1509005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1509098Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1509383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1509511Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1509842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1509962Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1510306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1510472Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1510848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1510983Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1510996Z 2025-08-14T22:00:29.1511095Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1511195Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1511293Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1511385Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1511485Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1511576Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1511667Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1511766Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1511891Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1512145Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1512226Z return mod(**inputs) 2025-08-14T22:00:29.1512564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1512655Z outputs = self.model( 2025-08-14T22:00:29.1513008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1513105Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1513448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1513537Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1513825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1513924Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1514273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1514416Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1514747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1514876Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1515245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1515407Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1515420Z 2025-08-14T22:00:29.1515553Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1515801Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1515883Z return mod(**inputs) 2025-08-14T22:00:29.1516228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1516336Z outputs = self.model( 2025-08-14T22:00:29.1516682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1516779Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1517134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1517234Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1517514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1517619Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1517946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1518078Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1518420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1518541Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1518906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1519049Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1519062Z 2025-08-14T22:00:29.1519157Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1519264Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1519389Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1519635Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1519724Z return mod(**inputs) 2025-08-14T22:00:29.1520058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1520144Z outputs = self.model( 2025-08-14T22:00:29.1520480Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1520594Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1520993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1525307Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1525596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1525710Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1526041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T22:00:29.1526203Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1526499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1526589Z return self.act(input) 2025-08-14T22:00:29.1526602Z 2025-08-14T22:00:29.1526712Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1526807Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1526901Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1527006Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1527099Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1527199Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1527291Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1527382Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1527518Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1527764Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1527846Z return mod(**inputs) 2025-08-14T22:00:29.1528226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1528309Z outputs = self.model( 2025-08-14T22:00:29.1528642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1528760Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1529091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1529186Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1529462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1529561Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1529897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1530020Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1530354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1530478Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1530844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1531015Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1531028Z 2025-08-14T22:00:29.1531153Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1531411Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1531494Z return mod(**inputs) 2025-08-14T22:00:29.1531829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1531919Z outputs = self.model( 2025-08-14T22:00:29.1532248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1532358Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1532702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1532791Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1533075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1533174Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1533505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1533634Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1533984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1534105Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1534482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1534614Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1534626Z 2025-08-14T22:00:29.1534728Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1534823Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1534916Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1535013Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1535105Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1535226Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1535340Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1535435Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1535661Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1535912Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1535996Z return mod(**inputs) 2025-08-14T22:00:29.1536339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1536450Z outputs = self.model( 2025-08-14T22:00:29.1536786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1536884Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1537212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1537308Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1537590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1537689Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1538030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1538163Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1538499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1538621Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1538987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1539158Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1539171Z 2025-08-14T22:00:29.1539301Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1539605Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1539698Z return mod(**inputs) 2025-08-14T22:00:29.1540056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1540149Z outputs = self.model( 2025-08-14T22:00:29.1540482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1540572Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1540912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1541002Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1541287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1541407Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1541740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1541880Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1542207Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1542326Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1542703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1542831Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1542844Z 2025-08-14T22:00:29.1542952Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1543046Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1543174Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1543458Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1543540Z return mod(**inputs) 2025-08-14T22:00:29.1543875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1543987Z outputs = self.model( 2025-08-14T22:00:29.1544318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1544416Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1544748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1544842Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1545131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1545231Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1545568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T22:00:29.1545721Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1545990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1546083Z return self.act(input) 2025-08-14T22:00:29.1546096Z 2025-08-14T22:00:29.1546194Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1546287Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1546387Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1546480Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1546577Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1546668Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1546759Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1546857Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1546983Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1547232Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1547366Z return mod(**inputs) 2025-08-14T22:00:29.1547701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1547785Z outputs = self.model( 2025-08-14T22:00:29.1548125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1548215Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1548546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1548632Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1549373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1549489Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1549877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1550010Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1554512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1554632Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1555007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1555168Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1555181Z 2025-08-14T22:00:29.1555318Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1555569Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1555694Z return mod(**inputs) 2025-08-14T22:00:29.1556040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1557024Z outputs = self.model( 2025-08-14T22:00:29.1557357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1557456Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1557786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1557884Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1558164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1558265Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1558605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1558727Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1559063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1559191Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1559554Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1559695Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1559708Z 2025-08-14T22:00:29.1559805Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1559898Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1560000Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1560095Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1560189Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1560291Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1560385Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1560528Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1560659Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1560912Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1561005Z return mod(**inputs) 2025-08-14T22:00:29.1561412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1561499Z outputs = self.model( 2025-08-14T22:00:29.1561843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1561935Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1562313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1562407Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1562691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1562800Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1563129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1563265Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1563603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1563726Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1564102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1564340Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1564354Z 2025-08-14T22:00:29.1564485Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1564827Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1564932Z return mod(**inputs) 2025-08-14T22:00:29.1565273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1565356Z outputs = self.model( 2025-08-14T22:00:29.1565687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1565784Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1566117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1566214Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1566495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1566595Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1566934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1567063Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1567390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1567513Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1567879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1568024Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1568038Z 2025-08-14T22:00:29.1568135Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1568235Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1568442Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1568694Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1568776Z return mod(**inputs) 2025-08-14T22:00:29.1569121Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1569204Z outputs = self.model( 2025-08-14T22:00:29.1569538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1569627Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1569976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1570073Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1570350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1570457Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1570787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T22:00:29.1570937Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1571215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1571300Z return self.act(input) 2025-08-14T22:00:29.1571313Z 2025-08-14T22:00:29.1571410Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1571510Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1571601Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1571698Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1571810Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1571902Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1572003Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1572094Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1572244Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1572502Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1572583Z return mod(**inputs) 2025-08-14T22:00:29.1572913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1573003Z outputs = self.model( 2025-08-14T22:00:29.1573332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1573427Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1573760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1573852Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1574135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1574236Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1574575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1574700Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1575030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1575155Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1575520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1575683Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1575703Z 2025-08-14T22:00:29.1575849Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1576098Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1576188Z return mod(**inputs) 2025-08-14T22:00:29.1576519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1576603Z outputs = self.model( 2025-08-14T22:00:29.1576941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1577032Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1577389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1577481Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1577762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1577869Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1578197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1578319Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1578661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1578826Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1587667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1587824Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1587869Z 2025-08-14T22:00:29.1587975Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1588088Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1588187Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1588291Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1588424Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1588521Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1588627Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1588723Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1588871Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1589203Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1589291Z return mod(**inputs) 2025-08-14T22:00:29.1589746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1589848Z outputs = self.model( 2025-08-14T22:00:29.1590309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1590417Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1590875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1590969Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1591254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1591351Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1591680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1591821Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1592153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1592279Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1592674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1592835Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1592848Z 2025-08-14T22:00:29.1592991Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1593286Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1593382Z return mod(**inputs) 2025-08-14T22:00:29.1595849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1595936Z outputs = self.model( 2025-08-14T22:00:29.1596304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1596402Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1596735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1596831Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1597108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1597213Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1597584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1597713Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1598049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1598167Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1598567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1598698Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1598732Z 2025-08-14T22:00:29.1598829Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1598929Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1599056Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1599304Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1599395Z return mod(**inputs) 2025-08-14T22:00:29.1599728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1599817Z outputs = self.model( 2025-08-14T22:00:29.1600150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1600241Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1600579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1600668Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1600954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1601050Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1601489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T22:00:29.1601649Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1601918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1602005Z return self.act(input) 2025-08-14T22:00:29.1602019Z 2025-08-14T22:00:29.1602126Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1602218Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1602317Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1602449Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1602543Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1602642Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1602732Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1602825Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1602957Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1603204Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1603286Z return mod(**inputs) 2025-08-14T22:00:29.1603643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1603735Z outputs = self.model( 2025-08-14T22:00:29.1604071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1604163Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1604494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1604592Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1604871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1604974Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1605300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1605423Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1605763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1605902Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1606268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1606459Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1606472Z 2025-08-14T22:00:29.1606598Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1606853Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1606935Z return mod(**inputs) 2025-08-14T22:00:29.1607267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1607357Z outputs = self.model( 2025-08-14T22:00:29.1607719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1607839Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1608250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1608344Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1608634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1608732Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1609061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1609190Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1609520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1609650Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1610064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1610222Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1610237Z 2025-08-14T22:00:29.1610343Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1610441Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1610542Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1610633Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1610726Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1610825Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1610915Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1611005Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1611140Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1611409Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1611493Z return mod(**inputs) 2025-08-14T22:00:29.1611842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1611927Z outputs = self.model( 2025-08-14T22:00:29.1612271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1612362Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1612693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1612794Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1613073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1613171Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1613512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1613685Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1614028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1614169Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1614534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1614700Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1614712Z 2025-08-14T22:00:29.1614840Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1615095Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1615181Z return mod(**inputs) 2025-08-14T22:00:29.1615516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1615609Z outputs = self.model( 2025-08-14T22:00:29.1615940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1616030Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1616366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1616454Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1616739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1616836Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1617169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1617309Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1617659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1617781Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1618154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1618282Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1618295Z 2025-08-14T22:00:29.1618396Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1618489Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1618613Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1618869Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1618971Z return mod(**inputs) 2025-08-14T22:00:29.1619319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1619404Z outputs = self.model( 2025-08-14T22:00:29.1619736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1619833Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1620160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1620254Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1620533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1620630Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1620965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T22:00:29.1621135Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1621402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1621499Z return self.act(input) 2025-08-14T22:00:29.1621534Z 2025-08-14T22:00:29.1621630Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1621731Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1621823Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1621915Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1622015Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1622106Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1622222Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1622356Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1622483Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1626983Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1627075Z return mod(**inputs) 2025-08-14T22:00:29.1627411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1627504Z outputs = self.model( 2025-08-14T22:00:29.1627833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1627927Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1628265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1628353Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1628639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1628737Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1629068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1629201Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1629560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1629683Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1630056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1630221Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1630234Z 2025-08-14T22:00:29.1630371Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1630618Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1630724Z return mod(**inputs) 2025-08-14T22:00:29.1631070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1631153Z outputs = self.model( 2025-08-14T22:00:29.1631492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1631584Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1631913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1632008Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1632286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1632382Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1632724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T22:00:29.1632873Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:29.1633216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1633332Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1633736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1633877Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1633890Z 2025-08-14T22:00:29.1633990Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1634094Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1634188Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1634280Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1634378Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1634472Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1634566Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1634666Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1634794Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1635045Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1635136Z return mod(**inputs) 2025-08-14T22:00:29.1635468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1635557Z outputs = self.model( 2025-08-14T22:00:29.1635886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1635975Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1636315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1636406Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1636705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1636855Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1637258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1637398Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1637728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1637845Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1638218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T22:00:29.1638398Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:00:29.1638414Z 2025-08-14T22:00:29.1638551Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1638799Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1638881Z return mod(**inputs) 2025-08-14T22:00:29.1639222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1639304Z outputs = self.model( 2025-08-14T22:00:29.1639635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1639730Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1640060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1640154Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1640433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1640553Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1640942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T22:00:29.1641097Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T22:00:29.1641525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T22:00:29.1641645Z attn_output, attn_weights = attention_interface( 2025-08-14T22:00:29.1642012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T22:00:29.1642148Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:00:29.1642160Z 2025-08-14T22:00:29.1642254Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1642350Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1642486Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:29.1642736Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1642827Z return mod(**inputs) 2025-08-14T22:00:29.1643160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T22:00:29.1643243Z outputs = self.model( 2025-08-14T22:00:29.1643581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T22:00:29.1643674Z decoder_outputs = self.decoder( 2025-08-14T22:00:29.1644008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T22:00:29.1644104Z layer_outputs = decoder_layer( 2025-08-14T22:00:29.1644387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:29.1644494Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:29.1644851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T22:00:29.1644998Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:29.1645272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:29.1645357Z return self.act(input) 2025-08-14T22:00:29.1645370Z 2025-08-14T22:00:29.1645476Z cudagraph partition due to non gpu ops 2025-08-14T22:00:29.1645603Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:29.1645848Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:29.1645936Z return mod(**inputs) 2025-08-14T22:00:29.1646321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1489, in forward 2025-08-14T22:00:29.1646472Z lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias 2025-08-14T22:00:29.1646492Z 2025-08-14T22:00:29.1646621Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T22:00:29.1646872Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:00:29.1646961Z return mod(**inputs)
2025-08-14T22:00:29.1647295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1494, in forward
2025-08-14T22:00:29.1647501Z masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T22:00:29.1647514Z
2025-08-14T22:00:40.9646816Z Compilation time (from dynamo_timed): 40.615093084
2025-08-14T22:00:40.9665410Z pass
2025-08-14T22:00:40.9667209Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:00:40.9672726Z TIMING: _recursive_pre_grad_passes:0.13022 _recursive_joint_graph_passes:1.44696 _recursive_post_grad_passes:0.21977 async_compile.wait:0.91963 code_gen:10.37579 inductor_compile:17.05285 backend_compile:32.76552 gc:0.00103 entire_frame_compile:40.61509 total_wall_time:40.61509
2025-08-14T22:00:40.9674037Z STATS: call_* op count: 965 | FakeTensorMode.__torch_dispatch__:63082 | FakeTensor.__torch_dispatch__:9680 | ProxyTorchDispatchMode.__torch_dispatch__:13875
2025-08-14T22:00:40.9674678Z Dynamo produced 1 graphs covering 965 ops with 0 graph breaks (0 unique)
2025-08-14T22:00:47.9319472Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T22:00:47.9320587Z from pkg_resources import resource_filename
2025-08-14T22:00:48.7443740Z
2025-08-14T22:00:48.7599474Z loading model: 0it [00:00, ?it/s]If you want to use `RobertaLMHeadModel` as a standalone, add `is_decoder=True.`
2025-08-14T22:00:48.7600462Z WARNING:transformers.models.roberta.modeling_roberta:If you want to use `RobertaLMHeadModel` as a standalone, add `is_decoder=True.`
2025-08-14T22:00:50.8765625Z We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
2025-08-14T22:00:50.8767320Z You may ignore this warning if your `pad_token_id` (0) is identical to the `bos_token_id` (0), `eos_token_id` (2), or the `sep_token_id` (None), and your input is not padded.
2025-08-14T22:00:50.8769087Z WARNING:transformers.modeling_utils:We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
2025-08-14T22:00:50.8770940Z You may ignore this warning if your `pad_token_id` (0) is identical to the `bos_token_id` (0), `eos_token_id` (2), or the `sep_token_id` (None), and your input is not padded.
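The Pegasus run above closes with the harness summary: "Compilation time (from dynamo_timed): 40.615093084", the TIMING/STATS breakdown, a "pass" verdict, and "Dynamo produced 1 graphs covering 965 ops with 0 graph breaks". A rough stand-alone way to observe the same cold-versus-warm compile cost with torch.compile on CPU is sketched below (a hypothetical example, not the benchmarks/dynamo/huggingface.py harness; the module and shapes are arbitrary):

import time
import torch

class TinyMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(64, 256)
        self.fc2 = torch.nn.Linear(256, 64)

    def forward(self, x):
        return self.fc2(torch.nn.functional.gelu(self.fc1(x)))

compiled = torch.compile(TinyMLP().eval(), backend="inductor")  # same backend this job exercises
x = torch.randn(8, 64)

start = time.perf_counter()
with torch.no_grad():
    compiled(x)                          # first call: Dynamo trace + Inductor compile + run
print(f"cold start: {time.perf_counter() - start:.3f}s")

start = time.perf_counter()
with torch.no_grad():
    compiled(x)                          # later calls reuse the compiled code
print(f"warm call:  {time.perf_counter() - start:.5f}s")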
2025-08-14T22:00:51.1902842Z 2025-08-14T22:00:51.1919732Z loading model: 0it [00:02, ?it/s] 2025-08-14T22:00:51.1920254Z cpu eval RobertaForCausalLM 2025-08-14T22:00:52.2283577Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:00:52.7607820Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:00:53.2985205Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:01:07.9479773Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:07.9480332Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:07.9481128Z return mod(**inputs) 2025-08-14T22:01:07.9482011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T22:01:07.9482861Z outputs = self.roberta( 2025-08-14T22:01:07.9483602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward 2025-08-14T22:01:07.9484136Z embedding_output = self.embeddings( 2025-08-14T22:01:07.9484648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward 2025-08-14T22:01:07.9485308Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-08-14T22:01:07.9486064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1576, in create_position_ids_from_input_ids 2025-08-14T22:01:07.9486668Z mask = input_ids.ne(padding_idx).int() 2025-08-14T22:01:07.9486946Z 2025-08-14T22:01:07.9487060Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9487311Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9487568Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9487821Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9488116Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9488365Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9488616Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9488864Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9489104Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9489354Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9489603Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9489840Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9490165Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:07.9490634Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:07.9491048Z return mod(**inputs) 2025-08-14T22:01:07.9491510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T22:01:07.9492011Z outputs = self.roberta( 2025-08-14T22:01:07.9492489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward 2025-08-14T22:01:07.9493007Z embedding_output = self.embeddings( 2025-08-14T22:01:07.9493506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward 2025-08-14T22:01:07.9494170Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-08-14T22:01:07.9494924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1577, in create_position_ids_from_input_ids 2025-08-14T22:01:07.9495661Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask 2025-08-14T22:01:07.9495973Z 2025-08-14T22:01:07.9496112Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:07.9496652Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:07.9497069Z return mod(**inputs) 2025-08-14T22:01:07.9497606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T22:01:07.9502385Z outputs = self.roberta( 2025-08-14T22:01:07.9502859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward 2025-08-14T22:01:07.9503364Z embedding_output = self.embeddings( 2025-08-14T22:01:07.9503858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward 2025-08-14T22:01:07.9504554Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-08-14T22:01:07.9505301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1577, in create_position_ids_from_input_ids 2025-08-14T22:01:07.9506035Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask 2025-08-14T22:01:07.9506341Z 2025-08-14T22:01:07.9506446Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9506762Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9507109Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9507490Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9507886Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9508181Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9508498Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9508866Z cudagraph partition due to non gpu ops. 
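Both traces in this block stop inside transformers' create_position_ids_from_input_ids helper, whose mask/cumsum arithmetic on input_ids is what gets flagged here as non-GPU work. A condensed sketch of that computation follows: the two middle lines are quoted verbatim in the log above, while the final return is recalled from the library and should be treated as an assumption:

import torch

def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
    return incremental_indices.long() + padding_idx  # assumed final step, not shown in the trace

# Illustrative call: RoBERTa uses padding_idx = 1, so positions start at 2.
input_ids = torch.tensor([[0, 5, 6, 7, 1, 1]])
print(create_position_ids_from_input_ids(input_ids, padding_idx=1))  # tensor([[2, 3, 4, 5, 1, 1]])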
Found from : 2025-08-14T22:01:07.9509437Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:07.9509996Z return mod(**inputs) 2025-08-14T22:01:07.9510570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T22:01:07.9511262Z outputs = self.roberta( 2025-08-14T22:01:07.9511878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T22:01:07.9512456Z encoder_outputs = self.encoder( 2025-08-14T22:01:07.9512941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T22:01:07.9513429Z layer_outputs = layer_module( 2025-08-14T22:01:07.9513870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:01:07.9514323Z return super().__call__(*args, **kwargs) 2025-08-14T22:01:07.9514829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T22:01:07.9515343Z self_attention_outputs = self.attention( 2025-08-14T22:01:07.9515833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T22:01:07.9516307Z return func(*args, **kwargs) 2025-08-14T22:01:07.9516788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T22:01:07.9517280Z self_outputs = self.self( 2025-08-14T22:01:07.9517726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T22:01:07.9518187Z return func(*args, **kwargs) 2025-08-14T22:01:07.9518663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T22:01:07.9519233Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:01:07.9519467Z 2025-08-14T22:01:07.9519566Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9519848Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9520191Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:07.9520781Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:07.9521350Z return mod(**inputs) 2025-08-14T22:01:07.9521922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T22:01:07.9522557Z outputs = self.roberta( 2025-08-14T22:01:07.9523112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T22:01:07.9523767Z encoder_outputs = self.encoder( 2025-08-14T22:01:07.9524424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T22:01:07.9525064Z layer_outputs = layer_module( 2025-08-14T22:01:07.9525549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:01:07.9526154Z return super().__call__(*args, **kwargs) 2025-08-14T22:01:07.9530963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T22:01:07.9531476Z layer_output = apply_chunking_to_forward( 2025-08-14T22:01:07.9531988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:01:07.9532482Z return forward_fn(*input_tensors) 2025-08-14T22:01:07.9533021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T22:01:07.9533661Z intermediate_output = self.intermediate(attention_output) 2025-08-14T22:01:07.9534221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T22:01:07.9534864Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T22:01:07.9535386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:01:07.9535815Z return self.act(input) 2025-08-14T22:01:07.9535960Z 2025-08-14T22:01:07.9536058Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9536312Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9536564Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9536814Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9537058Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9537295Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9537540Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9537784Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9538055Z cudagraph partition due to non gpu ops. 
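The feed_forward_chunk frames above pass through transformers' apply_chunking_to_forward utility before reaching the intermediate activation. The idea is re-implemented below as a hedged sketch (the concept only, not the library's exact code or signature): run the feed-forward block over slices along one dimension and concatenate the results, trading peak memory for extra calls:

import torch

def apply_chunking(forward_fn, chunk_size, chunk_dim, x):
    # chunk_size == 0 means "no chunking" in this sketch.
    if chunk_size == 0:
        return forward_fn(x)
    chunks = x.split(chunk_size, dim=chunk_dim)
    return torch.cat([forward_fn(c) for c in chunks], dim=chunk_dim)

feed_forward = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.GELU(), torch.nn.Linear(64, 16)
)
x = torch.randn(2, 128, 16)
out = apply_chunking(feed_forward, chunk_size=32, chunk_dim=1, x=x)
print(out.shape)  # torch.Size([2, 128, 16])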
Found from : 2025-08-14T22:01:07.9538507Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:07.9538917Z return mod(**inputs) 2025-08-14T22:01:07.9539377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T22:01:07.9539860Z outputs = self.roberta( 2025-08-14T22:01:07.9540329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T22:01:07.9540870Z encoder_outputs = self.encoder( 2025-08-14T22:01:07.9541430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T22:01:07.9541914Z layer_outputs = layer_module( 2025-08-14T22:01:07.9542353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:01:07.9542803Z return super().__call__(*args, **kwargs) 2025-08-14T22:01:07.9543321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T22:01:07.9543831Z self_attention_outputs = self.attention( 2025-08-14T22:01:07.9544314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T22:01:07.9544779Z return func(*args, **kwargs) 2025-08-14T22:01:07.9545264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T22:01:07.9545752Z self_outputs = self.self( 2025-08-14T22:01:07.9546214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T22:01:07.9546688Z return func(*args, **kwargs) 2025-08-14T22:01:07.9547161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T22:01:07.9547729Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:01:07.9547966Z 2025-08-14T22:01:07.9548064Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9548320Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9549074Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:07.9549542Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:07.9549937Z return mod(**inputs) 2025-08-14T22:01:07.9550401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T22:01:07.9550889Z outputs = self.roberta( 2025-08-14T22:01:07.9551348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T22:01:07.9551915Z encoder_outputs = self.encoder( 2025-08-14T22:01:07.9552395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T22:01:07.9552926Z layer_outputs = layer_module( 2025-08-14T22:01:07.9553347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:01:07.9553805Z return super().__call__(*args, **kwargs) 2025-08-14T22:01:07.9554300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T22:01:07.9554795Z layer_output = apply_chunking_to_forward( 2025-08-14T22:01:07.9555334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:01:07.9559994Z return forward_fn(*input_tensors) 2025-08-14T22:01:07.9560529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T22:01:07.9561119Z intermediate_output = self.intermediate(attention_output) 2025-08-14T22:01:07.9561743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T22:01:07.9562283Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T22:01:07.9562767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:01:07.9563188Z return self.act(input) 2025-08-14T22:01:07.9563336Z 2025-08-14T22:01:07.9563448Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9563737Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9563983Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9564230Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9564477Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9564718Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9564961Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9565262Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9565550Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:07.9565992Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:07.9566396Z return mod(**inputs) 2025-08-14T22:01:07.9566855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T22:01:07.9567337Z outputs = self.roberta( 2025-08-14T22:01:07.9567802Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T22:01:07.9569398Z encoder_outputs = self.encoder( 2025-08-14T22:01:07.9569964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T22:01:07.9570551Z layer_outputs = layer_module( 2025-08-14T22:01:07.9570996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:01:07.9571459Z return super().__call__(*args, **kwargs) 2025-08-14T22:01:07.9572010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T22:01:07.9572519Z self_attention_outputs = self.attention( 2025-08-14T22:01:07.9572996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T22:01:07.9573460Z return func(*args, **kwargs) 2025-08-14T22:01:07.9573933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T22:01:07.9574461Z self_outputs = self.self( 2025-08-14T22:01:07.9574904Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T22:01:07.9575358Z return func(*args, **kwargs) 2025-08-14T22:01:07.9575830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T22:01:07.9576432Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T22:01:07.9576668Z 2025-08-14T22:01:07.9576773Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9577019Z cudagraph partition due to non gpu ops 2025-08-14T22:01:07.9577302Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T22:01:07.9854614Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:01:07.9855022Z     return mod(**inputs)
2025-08-14T22:01:07.9855561Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1022, in forward
2025-08-14T22:01:07.9856063Z     lm_loss = self.loss_function(
2025-08-14T22:01:07.9856522Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss
2025-08-14T22:01:07.9857130Z     loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
2025-08-14T22:01:07.9857742Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy
2025-08-14T22:01:07.9858376Z     loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction)
2025-08-14T22:01:07.9858694Z 
2025-08-14T22:01:14.7990017Z Compilation time (from dynamo_timed): 19.586922774
2025-08-14T22:01:14.8151576Z pass
2025-08-14T22:01:14.8152989Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:01:14.8154194Z TIMING: _recursive_pre_grad_passes:0.04983 _recursive_joint_graph_passes:0.54335 _recursive_post_grad_passes:0.1086 async_compile.wait:0.97002 code_gen:5.99244 inductor_compile:9.71291 backend_compile:15.88142 gc:0.00111 entire_frame_compile:19.58692 total_wall_time:19.58692
2025-08-14T22:01:14.8155568Z STATS: call_* op count: 303 | FakeTensorMode.__torch_dispatch__:24314 | FakeTensor.__torch_dispatch__:3923 | ProxyTorchDispatchMode.__torch_dispatch__:5359
2025-08-14T22:01:14.8156205Z Dynamo produced 1 graphs covering 303 ops with 0 graph breaks (0 unique)
2025-08-14T22:01:21.2976813Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T22:01:21.2978190Z   from pkg_resources import resource_filename
2025-08-14T22:01:22.1579128Z 
2025-08-14T22:01:23.8377297Z loading model: 0it [00:00, ?it/s]We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
2025-08-14T22:01:23.8378751Z You may ignore this warning if your `pad_token_id` (0) is identical to the `bos_token_id` (0), `eos_token_id` (2), or the `sep_token_id` (None), and your input is not padded.
2025-08-14T22:01:23.8379846Z WARNING:transformers.modeling_utils:We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
2025-08-14T22:01:23.8380934Z You may ignore this warning if your `pad_token_id` (0) is identical to the `bos_token_id` (0), `eos_token_id` (2), or the `sep_token_id` (None), and your input is not padded.
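
The transformers warning above recommends passing an explicit attention_mask whenever the batch may contain padding. A hedged sketch of that recommendation follows; the "roberta-base" checkpoint and the sample sentences are placeholders for illustration and are not what the benchmark harness loads.

# Sketch of the warning's recommendation: pass attention_mask alongside input_ids.
# "roberta-base" and the example sentences are placeholders, not the benchmark inputs.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForQuestionAnswering.from_pretrained("roberta-base").eval()

batch = tokenizer(
    ["Short question?", "A noticeably longer second question that forces padding."],
    padding=True,
    return_tensors="pt",
)
with torch.no_grad():
    # attention_mask marks real tokens vs. padding so padded positions are ignored.
    outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])
print(outputs.start_logits.shape, outputs.end_logits.shape)
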
2025-08-14T22:01:24.1010319Z 
2025-08-14T22:01:24.1010876Z loading model: 0it [00:01, ?it/s]
2025-08-14T22:01:24.1026233Z cpu eval RobertaForQuestionAnswering
2025-08-14T22:01:24.9347578Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:01:25.3857524Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:01:25.8179546Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:01:40.4159298Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:01:40.4159969Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:01:40.4160408Z     return mod(**inputs)
2025-08-14T22:01:40.4160940Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward
2025-08-14T22:01:40.4161579Z     outputs = self.roberta(
2025-08-14T22:01:40.4162295Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward
2025-08-14T22:01:40.4162812Z     embedding_output = self.embeddings(
2025-08-14T22:01:40.4163329Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward
2025-08-14T22:01:40.4163999Z     position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length)
2025-08-14T22:01:40.4164751Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1576, in create_position_ids_from_input_ids
2025-08-14T22:01:40.4165346Z     mask = input_ids.ne(padding_idx).int()
2025-08-14T22:01:40.4165532Z 
2025-08-14T22:01:40.4165686Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4165954Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4166205Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4166488Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4166733Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4167052Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4175676Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4175960Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4176239Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4176527Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4176819Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4177102Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4177447Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T22:01:40.4178042Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:01:40.4178574Z     return mod(**inputs)
2025-08-14T22:01:40.4179268Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward
2025-08-14T22:01:40.4179765Z     outputs = self.roberta(
2025-08-14T22:01:40.4180238Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward
2025-08-14T22:01:40.4180797Z     embedding_output = self.embeddings(
2025-08-14T22:01:40.4181308Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward
2025-08-14T22:01:40.4184136Z     position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length)
2025-08-14T22:01:40.4184865Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1577, in create_position_ids_from_input_ids
2025-08-14T22:01:40.4185641Z     incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
2025-08-14T22:01:40.4185960Z 
2025-08-14T22:01:40.4186094Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:01:40.4186544Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:01:40.4186940Z     return mod(**inputs)
2025-08-14T22:01:40.4187405Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward
2025-08-14T22:01:40.4187898Z     outputs = self.roberta(
2025-08-14T22:01:40.4188370Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward
2025-08-14T22:01:40.4188863Z     embedding_output = self.embeddings(
2025-08-14T22:01:40.4189353Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward
2025-08-14T22:01:40.4190010Z     position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length)
2025-08-14T22:01:40.4190751Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1577, in create_position_ids_from_input_ids
2025-08-14T22:01:40.4191493Z     incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
2025-08-14T22:01:40.4191813Z 
2025-08-14T22:01:40.4191916Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4192171Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4192411Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4192660Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4192903Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4193141Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4193387Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4193673Z cudagraph partition due to non gpu ops.
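
Both traces above stop inside create_position_ids_from_input_ids, i.e. in the integer ops that build RoBERTa position ids from the padding mask. A standalone reproduction of that computation follows; the mask and cumsum lines mirror the logged frames, while the final "+ padding_idx" return and the example ids are assumptions added for illustration.

# Standalone sketch of the position-id computation named in the traces above.
# The two middle lines mirror the logged frames; the trailing "+ padding_idx"
# and the sample input_ids are assumptions for illustration.
import torch

def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
    return incremental_indices.long() + padding_idx

input_ids = torch.tensor([[0, 31414, 232, 2, 1, 1]])  # 1 is RoBERTa's padding index
print(create_position_ids_from_input_ids(input_ids, padding_idx=1))
# tensor([[2, 3, 4, 5, 1, 1]])
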
2025-08-14T22:01:40.4193673Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:01:40.4194166Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:01:40.4194566Z     return mod(**inputs)
2025-08-14T22:01:40.4195031Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward
2025-08-14T22:01:40.4195526Z     outputs = self.roberta(
2025-08-14T22:01:40.4196046Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward
2025-08-14T22:01:40.4196619Z     encoder_outputs = self.encoder(
2025-08-14T22:01:40.4197103Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward
2025-08-14T22:01:40.4197592Z     layer_outputs = layer_module(
2025-08-14T22:01:40.4198017Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:01:40.4198477Z     return super().__call__(*args, **kwargs)
2025-08-14T22:01:40.4198977Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward
2025-08-14T22:01:40.4199520Z     self_attention_outputs = self.attention(
2025-08-14T22:01:40.4199997Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T22:01:40.4200497Z     return func(*args, **kwargs)
2025-08-14T22:01:40.4200976Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward
2025-08-14T22:01:40.4201570Z     self_outputs = self.self(
2025-08-14T22:01:40.4202018Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T22:01:40.4202483Z     return func(*args, **kwargs)
2025-08-14T22:01:40.4202959Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward
2025-08-14T22:01:40.4203524Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T22:01:40.4203772Z 
2025-08-14T22:01:40.4203872Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4204134Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4204418Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:01:40.4204877Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:01:40.4205371Z     return mod(**inputs)
2025-08-14T22:01:40.4205843Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward
2025-08-14T22:01:40.4206330Z     outputs = self.roberta(
2025-08-14T22:01:40.4206799Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward
2025-08-14T22:01:40.4207295Z     encoder_outputs = self.encoder(
2025-08-14T22:01:40.4207776Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward
2025-08-14T22:01:40.4208292Z     layer_outputs = layer_module(
2025-08-14T22:01:40.4208764Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:01:40.4209219Z     return super().__call__(*args, **kwargs)
2025-08-14T22:01:40.4209722Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward
2025-08-14T22:01:40.4210227Z     layer_output = apply_chunking_to_forward(
2025-08-14T22:01:40.4215031Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T22:01:40.4215516Z     return forward_fn(*input_tensors)
2025-08-14T22:01:40.4216049Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk
2025-08-14T22:01:40.4216680Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T22:01:40.4217224Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward
2025-08-14T22:01:40.4217766Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T22:01:40.4218255Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T22:01:40.4218681Z     return self.act(input)
2025-08-14T22:01:40.4218819Z 
2025-08-14T22:01:40.4218919Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4219175Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4219424Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4219658Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4219903Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4220150Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4220392Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4220636Z cudagraph partition due to non gpu ops
2025-08-14T22:01:40.4220944Z cudagraph partition due to non gpu ops.
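The two tracebacks above show the per-layer pattern for the rest of the model: the scaled_dot_product_attention call at modeling_roberta.py:388 and the feed-forward activation reached through activations.py:69. The toy module below is an assumed stand-in (not RoBERTa's actual attention/intermediate modules) that exercises the same shape of computation compiled on cpu; it will not necessarily reproduce the partition log line, which depends on the run's inductor configuration.

    # Toy stand-in for the two frames flagged above: SDPA (modeling_roberta.py:388)
    # followed by a GELU feed-forward activation (activations.py:69), on cpu tensors.
    import torch
    import torch.nn.functional as F

    class ToyEncoderLayer(torch.nn.Module):
        def __init__(self, dim: int = 64):
            super().__init__()
            self.qkv = torch.nn.Linear(dim, 3 * dim)
            self.intermediate = torch.nn.Linear(dim, 4 * dim)
            self.output = torch.nn.Linear(4 * dim, dim)

        def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
            q, k, v = self.qkv(hidden_states).chunk(3, dim=-1)
            attn_output = F.scaled_dot_product_attention(q, k, v)   # the :388 frame
            intermediate = F.gelu(self.intermediate(attn_output))   # the activations.py:69 frame
            return self.output(intermediate)

    layer = torch.compile(ToyEncoderLayer())      # inductor backend, cpu-only tensors
    print(layer(torch.randn(2, 16, 64)).shape)    # torch.Size([2, 16, 64])

Every encoder layer passes through the same pair of frames, which is why this pair of tracebacks recurs while the model is being compiled.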
Found from : 2025-08-14T22:01:40.4522366Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:40.4522768Z return mod(**inputs) 2025-08-14T22:01:40.4523243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T22:01:40.4523744Z outputs = self.roberta( 2025-08-14T22:01:40.4524240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T22:01:40.4524730Z encoder_outputs = self.encoder( 2025-08-14T22:01:40.4525215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T22:01:40.4525713Z layer_outputs = layer_module( 2025-08-14T22:01:40.4526143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:01:40.4526606Z return super().__call__(*args, **kwargs) 2025-08-14T22:01:40.4527106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T22:01:40.4527616Z layer_output = apply_chunking_to_forward( 2025-08-14T22:01:40.4528111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:01:40.4528609Z return forward_fn(*input_tensors) 2025-08-14T22:01:40.4529142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T22:01:40.4534016Z intermediate_output = self.intermediate(attention_output) 2025-08-14T22:01:40.4534568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T22:01:40.4535153Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T22:01:40.4535640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:01:40.4536063Z return self.act(input) 2025-08-14T22:01:40.4536216Z 2025-08-14T22:01:40.4536320Z cudagraph partition due to non gpu ops 2025-08-14T22:01:40.4536586Z cudagraph partition due to non gpu ops 2025-08-14T22:01:40.4536875Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:40.4537320Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:40.4537728Z return mod(**inputs) 2025-08-14T22:01:40.4538198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1548, in forward 2025-08-14T22:01:40.4538724Z start_loss = loss_fct(start_logits, start_positions) 2025-08-14T22:01:40.4538932Z 2025-08-14T22:01:40.4539064Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:40.4539516Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:01:40.4539916Z return mod(**inputs)
2025-08-14T22:01:40.4540372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1549, in forward
2025-08-14T22:01:40.4540898Z end_loss = loss_fct(end_logits, end_positions)
2025-08-14T22:01:40.4541084Z 
2025-08-14T22:01:45.9381306Z Compilation time (from dynamo_timed): 18.347213988
2025-08-14T22:01:45.9382279Z pass
2025-08-14T22:01:45.9388101Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:01:45.9392977Z TIMING: _recursive_pre_grad_passes:0.04918 _recursive_joint_graph_passes:0.52919 _recursive_post_grad_passes:0.11708 async_compile.wait:0.00369 code_gen:4.84446 inductor_compile:8.57015 backend_compile:14.67986 gc:0.00147 entire_frame_compile:18.34721 total_wall_time:18.34721
2025-08-14T22:01:45.9394136Z STATS: call_* op count: 303 | FakeTensorMode.__torch_dispatch__:24185 | FakeTensor.__torch_dispatch__:3941 | ProxyTorchDispatchMode.__torch_dispatch__:5386
2025-08-14T22:01:45.9394770Z Dynamo produced 1 graphs covering 303 ops with 0 graph breaks (0 unique)
2025-08-14T22:01:52.3466513Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T22:01:52.3467859Z from pkg_resources import resource_filename
2025-08-14T22:01:53.0644010Z 
2025-08-14T22:01:54.7099758Z loading model: 0it [00:00, ?it/s]
2025-08-14T22:01:54.7100153Z loading model: 0it [00:01, ?it/s]
2025-08-14T22:01:54.7119279Z cpu eval T5ForConditionalGeneration
2025-08-14T22:01:56.9622708Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:01:57.8567284Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:01:58.8282697Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:02:16.7456924Z cudagraph partition due to non gpu ops. 
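[editor's note] The per-model summary lines above ("Compilation time (from dynamo_timed): 18.347...", "Dynamo produced 1 graphs covering 303 ops with 0 graph breaks") are the benchmark harness reporting compile results for the model it just finished. A minimal sketch of how similar numbers can be gathered outside the harness, assuming a recent PyTorch with torch._dynamo available; the tiny Sequential model below is a hypothetical stand-in for the HuggingFace model this job actually compiles:

    import time
    import torch

    # Hypothetical stand-in; the benchmark compiles HuggingFace models instead.
    model = torch.nn.Sequential(
        torch.nn.Linear(64, 256),
        torch.nn.GELU(),
        torch.nn.Linear(256, 64),
    ).eval()
    example = torch.randn(8, 64)

    # Graph/graph-break summary, analogous to the "Dynamo produced N graphs ..." line.
    explanation = torch._dynamo.explain(model)(example)
    print(explanation)

    # Rough wall-clock for the first compiled call, analogous to the dynamo_timed total.
    compiled = torch.compile(model, backend="inductor")
    start = time.perf_counter()
    with torch.no_grad():
        compiled(example)
    print(f"first call (compile + run): {time.perf_counter() - start:.2f}s")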
Found from : 2025-08-14T22:02:16.7457700Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7458121Z return mod(**inputs) 2025-08-14T22:02:16.7458606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.7459477Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.7460201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7460752Z layer_outputs = layer_module( 2025-08-14T22:02:16.7461254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7461716Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7462196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7462679Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7463141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7463618Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7464263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 546, in forward 2025-08-14T22:02:16.7464921Z position_bias = position_bias + causal_mask 2025-08-14T22:02:16.7465116Z 2025-08-14T22:02:16.7465262Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.7465724Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7466139Z return mod(**inputs) 2025-08-14T22:02:16.7466569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.7467034Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.7467492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7467958Z layer_outputs = layer_module( 2025-08-14T22:02:16.7468384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7468842Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7469376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7469844Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7470322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7470820Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7471331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:16.7471788Z query_states = self.q(hidden_states) 2025-08-14T22:02:16.7471970Z 2025-08-14T22:02:16.7472104Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.7472609Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7473020Z return mod(**inputs) 2025-08-14T22:02:16.7473447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.7473918Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.7474378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7474830Z layer_outputs = layer_module( 2025-08-14T22:02:16.7475261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7475722Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7476176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7476647Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7477130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7477638Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7478098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:16.7478563Z key_states = self.k(current_states) 2025-08-14T22:02:16.7483050Z 2025-08-14T22:02:16.7483187Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.7483650Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7484051Z return mod(**inputs) 2025-08-14T22:02:16.7484488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.7484950Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.7485402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7485864Z layer_outputs = layer_module( 2025-08-14T22:02:16.7486300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7486757Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7487220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7487699Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7488169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7488645Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7489104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:16.7489712Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:16.7489963Z 2025-08-14T22:02:16.7490164Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.7490673Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7491213Z return mod(**inputs) 2025-08-14T22:02:16.7491816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.7492410Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.7492939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7493725Z layer_outputs = layer_module( 2025-08-14T22:02:16.7494162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7494609Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7495072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7495583Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7496055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7496521Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7496984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:16.7497455Z value_states = self.v(current_states) 2025-08-14T22:02:16.7497626Z 2025-08-14T22:02:16.7497847Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.7498399Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7498868Z return mod(**inputs) 2025-08-14T22:02:16.7499401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.7499987Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.7500557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7501211Z layer_outputs = layer_module( 2025-08-14T22:02:16.7501713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7502262Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7502875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7503399Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7503966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7504609Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7505130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:16.7505704Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:16.7505966Z 2025-08-14T22:02:16.7506140Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.7506704Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7507182Z return mod(**inputs) 2025-08-14T22:02:16.7516190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.7516831Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.7517498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7518138Z layer_outputs = layer_module( 2025-08-14T22:02:16.7518742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7519343Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7519855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7520378Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7520971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7521546Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7522061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:16.7524731Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:16.7524941Z 2025-08-14T22:02:16.7525075Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.7525528Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7526012Z return mod(**inputs) 2025-08-14T22:02:16.7526478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.7527012Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.7527465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7527926Z layer_outputs = layer_module( 2025-08-14T22:02:16.7528349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7528804Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7529277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7529745Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7530211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7530691Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7531243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:16.7531702Z attn_output = self.o(attn_output) 2025-08-14T22:02:16.7531873Z 2025-08-14T22:02:16.7532007Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.7532493Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7532892Z return mod(**inputs) 2025-08-14T22:02:16.7533324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7533784Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7534237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7534693Z layer_outputs = layer_module( 2025-08-14T22:02:16.7535129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7535581Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7536043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7536503Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7537062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7537562Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7538018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:16.7538484Z value_states = self.v(current_states) 2025-08-14T22:02:16.7538657Z 2025-08-14T22:02:16.7538787Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.7539236Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7539632Z return mod(**inputs) 2025-08-14T22:02:16.7540060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7540571Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7541019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7541482Z layer_outputs = layer_module( 2025-08-14T22:02:16.7541912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7542364Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7542821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7543286Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7543778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7544257Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7544712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:16.7545176Z query_states = self.q(hidden_states) 2025-08-14T22:02:16.7545350Z 2025-08-14T22:02:16.7545487Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.7545926Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7546332Z return mod(**inputs) 2025-08-14T22:02:16.7546759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7547227Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7547675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7548135Z layer_outputs = layer_module( 2025-08-14T22:02:16.7548595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7549440Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7549906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7550472Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7550936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7555564Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7556092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:16.7556568Z key_states = self.k(current_states) 2025-08-14T22:02:16.7556736Z 2025-08-14T22:02:16.7556873Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.7557330Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7557741Z return mod(**inputs) 2025-08-14T22:02:16.7558177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7558643Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7559100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7559559Z layer_outputs = layer_module( 2025-08-14T22:02:16.7559984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7560431Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7560888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7561440Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7561895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7562365Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7562871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:16.7563407Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:16.7563634Z 2025-08-14T22:02:16.7563763Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.7564204Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7564604Z return mod(**inputs) 2025-08-14T22:02:16.7565020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7565482Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7566052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7566549Z layer_outputs = layer_module( 2025-08-14T22:02:16.7566970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7567418Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7567876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7568335Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7568794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7569263Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7569724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:16.7570222Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:16.7570470Z 2025-08-14T22:02:16.7570601Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.7571046Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7571476Z return mod(**inputs) 2025-08-14T22:02:16.7571895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7572360Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7572822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7573271Z layer_outputs = layer_module( 2025-08-14T22:02:16.7573703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7574152Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7574619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7575081Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7575548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7576027Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7576481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:16.7577005Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:16.7577220Z 2025-08-14T22:02:16.7577349Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.7577796Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7578187Z return mod(**inputs) 2025-08-14T22:02:16.7578722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7579198Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7579693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7584383Z layer_outputs = layer_module( 2025-08-14T22:02:16.7584868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7585321Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7585778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7586249Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7586713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7587179Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7587668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:16.7588136Z attn_output = self.o(attn_output) 2025-08-14T22:02:16.7588297Z 2025-08-14T22:02:16.7588436Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.7588902Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7589328Z return mod(**inputs) 2025-08-14T22:02:16.7589754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.7590210Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.7590650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7591108Z layer_outputs = layer_module( 2025-08-14T22:02:16.7591542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7592019Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7592476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.7592949Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.7593447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.7593913Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.7594383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:16.7594917Z query_states = self.q(hidden_states) 2025-08-14T22:02:16.7595126Z 2025-08-14T22:02:16.7595272Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.7595709Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7596113Z return mod(**inputs) 2025-08-14T22:02:16.7596543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7597000Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7597498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7597966Z layer_outputs = layer_module( 2025-08-14T22:02:16.7598395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7598842Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7599307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.7599793Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.7600271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:16.7600794Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:16.7601403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T22:02:16.7601873Z hidden_states = self.wi(hidden_states) 2025-08-14T22:02:16.7602046Z 2025-08-14T22:02:16.7602176Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.7602621Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7603032Z return mod(**inputs) 2025-08-14T22:02:16.7603479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7603936Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7604428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7604899Z layer_outputs = layer_module( 2025-08-14T22:02:16.7605326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7605783Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7606245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.7606727Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.7607198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:16.7607706Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:16.7608212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T22:02:16.7608667Z hidden_states = self.act(hidden_states) 2025-08-14T22:02:16.7608842Z 2025-08-14T22:02:16.7608974Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.7613680Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7614083Z return mod(**inputs) 2025-08-14T22:02:16.7614505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7615017Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7615473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7615928Z layer_outputs = layer_module( 2025-08-14T22:02:16.7616347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7616793Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7617251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.7617723Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.7618196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:16.7618710Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:16.7619222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T22:02:16.7619677Z hidden_states = self.wo(hidden_states) 2025-08-14T22:02:16.7619854Z 2025-08-14T22:02:16.7619983Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.7620434Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7620827Z return mod(**inputs) 2025-08-14T22:02:16.7621256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7621727Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7622189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7622639Z layer_outputs = layer_module( 2025-08-14T22:02:16.7623092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7623556Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7624126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7624594Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7625060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7625535Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7626086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:16.7626556Z query_states = self.q(hidden_states) 2025-08-14T22:02:16.7626732Z 2025-08-14T22:02:16.7626862Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.7627315Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7627714Z return mod(**inputs) 2025-08-14T22:02:16.7628147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7628678Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7629128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7629596Z layer_outputs = layer_module( 2025-08-14T22:02:16.7630026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7630477Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7630958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7631429Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7631891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7632373Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7632832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:16.7633288Z key_states = self.k(current_states) 2025-08-14T22:02:16.7633454Z 2025-08-14T22:02:16.7633591Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.7634028Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7634424Z return mod(**inputs) 2025-08-14T22:02:16.7634852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7635316Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7635761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7636219Z layer_outputs = layer_module( 2025-08-14T22:02:16.7636648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7637089Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7637548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7638015Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7642722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7643188Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7643653Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:16.7644182Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:16.7644444Z 2025-08-14T22:02:16.7644574Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.7645023Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7645433Z return mod(**inputs) 2025-08-14T22:02:16.7645864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7646324Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7646781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7647242Z layer_outputs = layer_module( 2025-08-14T22:02:16.7647690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7648145Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7648611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7649447Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7649909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7650377Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7650841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:16.7651303Z value_states = self.v(current_states) 2025-08-14T22:02:16.7651471Z 2025-08-14T22:02:16.7651601Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.7652051Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7652529Z return mod(**inputs) 2025-08-14T22:02:16.7653034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7653556Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7654045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7654508Z layer_outputs = layer_module( 2025-08-14T22:02:16.7654928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7655382Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7655844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7656307Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7656773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7657290Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7657751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:16.7658250Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:16.7658465Z 2025-08-14T22:02:16.7658598Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.7659046Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7659446Z return mod(**inputs) 2025-08-14T22:02:16.7659865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7660324Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7660780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7661230Z layer_outputs = layer_module( 2025-08-14T22:02:16.7661660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7662150Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7662613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7663075Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7663534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7664006Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7664461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:16.7664961Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:16.7665208Z 2025-08-14T22:02:16.7665342Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.7665783Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7666179Z return mod(**inputs) 2025-08-14T22:02:16.7666613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7667079Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7675929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7676428Z layer_outputs = layer_module( 2025-08-14T22:02:16.7676863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7677319Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7677775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.7678282Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.7678752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.7679232Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.7679711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:16.7680177Z attn_output = self.o(attn_output) 2025-08-14T22:02:16.7680341Z 2025-08-14T22:02:16.7680477Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.7680923Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7681409Z return mod(**inputs) 2025-08-14T22:02:16.7684039Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7684511Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7684965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7685425Z layer_outputs = layer_module( 2025-08-14T22:02:16.7685879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7686372Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7686833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.7687320Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.7687803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:16.7688311Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:16.7688823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T22:02:16.7689285Z hidden_states = self.wi(hidden_states) 2025-08-14T22:02:16.7689450Z 2025-08-14T22:02:16.7689585Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.7690049Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.7690454Z return mod(**inputs) 2025-08-14T22:02:16.7690881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.7691333Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.7691783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.7692238Z layer_outputs = layer_module( 2025-08-14T22:02:16.7692666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.7693133Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.7693592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.7694071Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.7694538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:16.7695061Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:16.7695568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T22:02:16.7696031Z hidden_states = self.act(hidden_states) 2025-08-14T22:02:16.7696280Z 2025-08-14T22:02:16.7696409Z cudagraph partition due to non gpu ops. 
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward
    hidden_states = self.layer[-1](hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward
    forwarded_states = self.DenseReluDense(forwarded_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward
    hidden_states = self.wo(hidden_states)

2025-08-14T22:02:16.7703270Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
    self_attention_outputs = self.layer[0](
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
    attention_output = self.SelfAttention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward
    query_states = self.q(hidden_states)

2025-08-14T22:02:16.7709928Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
    self_attention_outputs = self.layer[0](
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
    attention_output = self.SelfAttention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward
    key_states = self.k(current_states)

2025-08-14T22:02:16.7720913Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
    self_attention_outputs = self.layer[0](
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
    attention_output = self.SelfAttention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward
    scores = torch.matmul(query_states, key_states.transpose(3, 2))

2025-08-14T22:02:16.7727923Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
    self_attention_outputs = self.layer[0](
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
    attention_output = self.SelfAttention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward
    value_states = self.v(current_states)

2025-08-14T22:02:16.7735025Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
    self_attention_outputs = self.layer[0](
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
    attention_output = self.SelfAttention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward
    attn_output = torch.matmul(attn_weights, value_states)

2025-08-14T22:02:16.7746096Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
    self_attention_outputs = self.layer[0](
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
    attention_output = self.SelfAttention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward
    attn_output = attn_output.transpose(1, 2).contiguous()

2025-08-14T22:02:16.7753295Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
    self_attention_outputs = self.layer[0](
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
    attention_output = self.SelfAttention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward
    attn_output = self.o(attn_output)

2025-08-14T22:02:16.7777357Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward
    hidden_states = self.layer[-1](hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward
    forwarded_states = self.DenseReluDense(forwarded_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward
    hidden_states = self.wi(hidden_states)

2025-08-14T22:02:16.7784471Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward
    hidden_states = self.layer[-1](hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward
    forwarded_states = self.DenseReluDense(forwarded_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward
    hidden_states = self.act(hidden_states)

2025-08-14T22:02:16.7861877Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
    self_attention_outputs = self.layer[0](
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 609, in forward
    hidden_states = hidden_states + self.dropout(attention_output[0])

Found from : 2025-08-14T22:02:16.8053108Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8053202Z return mod(**inputs) 2025-08-14T22:02:16.8053507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:16.8053598Z encoder_outputs = self.encoder( 2025-08-14T22:02:16.8053909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8053999Z layer_outputs = layer_module( 2025-08-14T22:02:16.8054354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8054459Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8054750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.8054868Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.8055160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:16.8055303Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:16.8055649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T22:02:16.8055756Z hidden_states = self.wo(hidden_states) 2025-08-14T22:02:16.8055769Z 2025-08-14T22:02:16.8055905Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8056161Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8056245Z return mod(**inputs) 2025-08-14T22:02:16.8056550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8056644Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8056949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8057040Z layer_outputs = layer_module( 2025-08-14T22:02:16.8057320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8057429Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8057760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8057862Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8058172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8058308Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8062791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:16.8062900Z key_states = self.k(current_states) 2025-08-14T22:02:16.8062913Z 2025-08-14T22:02:16.8063045Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8063309Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8063394Z return mod(**inputs) 2025-08-14T22:02:16.8063701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8063810Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8064118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8064222Z layer_outputs = layer_module( 2025-08-14T22:02:16.8064509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8064609Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8064911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8065014Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8065316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8065428Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8065725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:16.8065894Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:16.8065932Z 2025-08-14T22:02:16.8066059Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8066308Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8066396Z return mod(**inputs) 2025-08-14T22:02:16.8066693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8066789Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8067086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8067173Z layer_outputs = layer_module( 2025-08-14T22:02:16.8067490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8067588Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8067890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8067991Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8068284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8068395Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8068687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:16.8068784Z value_states = self.v(current_states) 2025-08-14T22:02:16.8068797Z 2025-08-14T22:02:16.8068928Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8069183Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8069291Z return mod(**inputs) 2025-08-14T22:02:16.8069588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8069678Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8070001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8070090Z layer_outputs = layer_module( 2025-08-14T22:02:16.8070372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8070473Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8070763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8070866Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8071159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8071263Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8071566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:16.8071701Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:16.8071713Z 2025-08-14T22:02:16.8071845Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8072091Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8072176Z return mod(**inputs) 2025-08-14T22:02:16.8072484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8072573Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8072874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8072971Z layer_outputs = layer_module( 2025-08-14T22:02:16.8073327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8073459Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8073816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8073918Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8074219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8074324Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8074619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:16.8074764Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:16.8074801Z 2025-08-14T22:02:16.8074929Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8075186Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8075271Z return mod(**inputs) 2025-08-14T22:02:16.8075570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8075673Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8075970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8076069Z layer_outputs = layer_module( 2025-08-14T22:02:16.8076353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8076450Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8076758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8076881Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8077172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8077286Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8077602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:16.8077707Z attn_output = self.o(attn_output) 2025-08-14T22:02:16.8077720Z 2025-08-14T22:02:16.8077848Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8078095Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8078189Z return mod(**inputs) 2025-08-14T22:02:16.8078486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8078591Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8078897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8078987Z layer_outputs = layer_module( 2025-08-14T22:02:16.8079278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8079375Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8079665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.8079835Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.8080130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:16.8080283Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:16.8080577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T22:02:16.8080679Z hidden_states = self.wi(hidden_states) 2025-08-14T22:02:16.8080691Z 2025-08-14T22:02:16.8080832Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8081108Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8081284Z return mod(**inputs) 2025-08-14T22:02:16.8081591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8081682Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8081985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8082073Z layer_outputs = layer_module( 2025-08-14T22:02:16.8082351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8082482Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8082777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.8082909Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.8083204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:16.8083354Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:16.8083653Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T22:02:16.8083761Z hidden_states = self.act(hidden_states) 2025-08-14T22:02:16.8083773Z 2025-08-14T22:02:16.8083900Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8084157Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8084241Z return mod(**inputs) 2025-08-14T22:02:16.8084571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8084661Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8084958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8085083Z layer_outputs = layer_module( 2025-08-14T22:02:16.8085362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8085459Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8085756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.8085869Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.8086170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:16.8086310Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:16.8086599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T22:02:16.8086708Z hidden_states = self.wo(hidden_states) 2025-08-14T22:02:16.8086722Z 2025-08-14T22:02:16.8086847Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8087099Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8087178Z return mod(**inputs) 2025-08-14T22:02:16.8087479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8087575Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8092089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8092180Z layer_outputs = layer_module( 2025-08-14T22:02:16.8092467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8092561Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8092884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8092986Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8093276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8093387Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8093679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:16.8093781Z query_states = self.q(hidden_states) 2025-08-14T22:02:16.8093794Z 2025-08-14T22:02:16.8093918Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8094185Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8094276Z return mod(**inputs) 2025-08-14T22:02:16.8094578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8094668Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8094975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8095066Z layer_outputs = layer_module( 2025-08-14T22:02:16.8095356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8095457Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8095749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8095856Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8096175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8096278Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8096579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:16.8096694Z key_states = self.k(current_states) 2025-08-14T22:02:16.8096707Z 2025-08-14T22:02:16.8096839Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8097083Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8097164Z return mod(**inputs) 2025-08-14T22:02:16.8097468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8097556Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8097864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8097959Z layer_outputs = layer_module( 2025-08-14T22:02:16.8098242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8098347Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8098643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8098742Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8099039Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8099140Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8099442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:16.8099604Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:16.8099618Z 2025-08-14T22:02:16.8099745Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8100000Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8100113Z return mod(**inputs) 2025-08-14T22:02:16.8100411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8100509Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8100806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8100906Z layer_outputs = layer_module( 2025-08-14T22:02:16.8101187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8101288Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8101613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8101713Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8102018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8102196Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8102492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:16.8102639Z value_states = self.v(current_states) 2025-08-14T22:02:16.8102652Z 2025-08-14T22:02:16.8102779Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8103027Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8103122Z return mod(**inputs) 2025-08-14T22:02:16.8103422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8103549Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8103850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8103946Z layer_outputs = layer_module( 2025-08-14T22:02:16.8104240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8104361Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8104655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8104776Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8105067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8105177Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8105469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:16.8105606Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:16.8105619Z 2025-08-14T22:02:16.8105755Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8106006Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8106100Z return mod(**inputs) 2025-08-14T22:02:16.8106394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8106483Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8106838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8106926Z layer_outputs = layer_module( 2025-08-14T22:02:16.8107207Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8107310Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8107600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8107728Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8108024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8108124Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8108421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:16.8108553Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:16.8108566Z 2025-08-14T22:02:16.8108697Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8108946Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8109050Z return mod(**inputs) 2025-08-14T22:02:16.8109352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8109442Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8109740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8109833Z layer_outputs = layer_module( 2025-08-14T22:02:16.8110110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8110209Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8110499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8110597Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8110893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8111018Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8111312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:16.8111414Z attn_output = self.o(attn_output) 2025-08-14T22:02:16.8111450Z 2025-08-14T22:02:16.8111574Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8111825Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8111908Z return mod(**inputs) 2025-08-14T22:02:16.8112204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8112300Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8112594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8112690Z layer_outputs = layer_module( 2025-08-14T22:02:16.8112968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8113066Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8113371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8113477Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8113767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8113877Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8114170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:16.8114272Z query_states = self.q(hidden_states) 2025-08-14T22:02:16.8114285Z 2025-08-14T22:02:16.8114409Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8114664Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8114760Z return mod(**inputs) 2025-08-14T22:02:16.8115083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8115179Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8115483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8115572Z layer_outputs = layer_module( 2025-08-14T22:02:16.8115862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8115959Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8116248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8116382Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8125133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8125265Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8125663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:16.8125771Z key_states = self.k(current_states) 2025-08-14T22:02:16.8125785Z 2025-08-14T22:02:16.8125942Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8126269Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8126356Z return mod(**inputs) 2025-08-14T22:02:16.8126761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8126859Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8127273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8127405Z layer_outputs = layer_module( 2025-08-14T22:02:16.8127778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8127892Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8128319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8128430Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8128813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8128915Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8129218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:16.8129378Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:16.8129393Z 2025-08-14T22:02:16.8129518Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8129773Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8129859Z return mod(**inputs) 2025-08-14T22:02:16.8130166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8130256Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8130555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8130651Z layer_outputs = layer_module( 2025-08-14T22:02:16.8130932Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8131028Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8133445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8133548Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8133869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8133974Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8134264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:16.8134370Z value_states = self.v(current_states) 2025-08-14T22:02:16.8134383Z 2025-08-14T22:02:16.8134511Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8134765Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8134846Z return mod(**inputs) 2025-08-14T22:02:16.8135167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8135272Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8135568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8135666Z layer_outputs = layer_module( 2025-08-14T22:02:16.8135990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8136085Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8136385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8136487Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8136780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8136888Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8137180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:16.8137340Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:16.8137361Z 2025-08-14T22:02:16.8137488Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8137734Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8137853Z return mod(**inputs) 2025-08-14T22:02:16.8138148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8138238Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8138538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8138628Z layer_outputs = layer_module( 2025-08-14T22:02:16.8138916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8139016Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8139309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8139414Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8139707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8139812Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8140109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:16.8140241Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:16.8140253Z 2025-08-14T22:02:16.8140381Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8140630Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8140713Z return mod(**inputs) 2025-08-14T22:02:16.8141015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8141105Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8141422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8141517Z layer_outputs = layer_module( 2025-08-14T22:02:16.8141797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8141901Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8142196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8142296Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8142617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8142721Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8143021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:16.8143121Z attn_output = self.o(attn_output) 2025-08-14T22:02:16.8143135Z 2025-08-14T22:02:16.8143263Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8143515Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8143598Z return mod(**inputs) 2025-08-14T22:02:16.8143896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8143994Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8144290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8144390Z layer_outputs = layer_module( 2025-08-14T22:02:16.8144688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8144784Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8145091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.8145229Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.8145526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:16.8145752Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:16.8146067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T22:02:16.8146204Z hidden_states = self.wi(hidden_states) 2025-08-14T22:02:16.8146217Z 2025-08-14T22:02:16.8146346Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8146597Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8146686Z return mod(**inputs) 2025-08-14T22:02:16.8146987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8147091Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8147388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8147476Z layer_outputs = layer_module( 2025-08-14T22:02:16.8147768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8147866Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8148158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.8148282Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.8148578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:16.8149079Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:16.8149541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T22:02:16.8149645Z hidden_states = self.act(hidden_states) 2025-08-14T22:02:16.8149658Z 2025-08-14T22:02:16.8149795Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8150045Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8150135Z return mod(**inputs) 2025-08-14T22:02:16.8150436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8150529Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8150867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8150961Z layer_outputs = layer_module( 2025-08-14T22:02:16.8151244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8158901Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8159319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.8159439Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.8159751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:16.8159914Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:16.8164573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T22:02:16.8164851Z hidden_states = self.wo(hidden_states) 2025-08-14T22:02:16.8164868Z 2025-08-14T22:02:16.8165017Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8165287Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8165423Z return mod(**inputs) 2025-08-14T22:02:16.8165745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8165845Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8166152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8166254Z layer_outputs = layer_module( 2025-08-14T22:02:16.8166541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8166652Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8166961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8167066Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8167373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8167481Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8167789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:16.8167888Z query_states = self.q(hidden_states) 2025-08-14T22:02:16.8167902Z 2025-08-14T22:02:16.8168035Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8168297Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8168386Z return mod(**inputs) 2025-08-14T22:02:16.8168691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8168795Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8169134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8169245Z layer_outputs = layer_module( 2025-08-14T22:02:16.8169539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8169639Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8169942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8170044Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8170340Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8170455Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8170785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:16.8170892Z key_states = self.k(current_states) 2025-08-14T22:02:16.8170905Z 2025-08-14T22:02:16.8171036Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8171291Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8171382Z return mod(**inputs) 2025-08-14T22:02:16.8171685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8171789Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8172091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8172184Z layer_outputs = layer_module( 2025-08-14T22:02:16.8172479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8172605Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8172902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8173012Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8173326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8173439Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8173733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:16.8173896Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:16.8173909Z 2025-08-14T22:02:16.8174051Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8174305Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8174403Z return mod(**inputs) 2025-08-14T22:02:16.8174791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8174888Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8175260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8175350Z layer_outputs = layer_module( 2025-08-14T22:02:16.8175631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8175741Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8176035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8176147Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8176442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8176546Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8176876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:16.8176978Z value_states = self.v(current_states) 2025-08-14T22:02:16.8176991Z 2025-08-14T22:02:16.8177124Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8177382Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8177466Z return mod(**inputs) 2025-08-14T22:02:16.8177778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8177871Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8178196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8178299Z layer_outputs = layer_module( 2025-08-14T22:02:16.8178582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8178689Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8178985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8179086Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8179388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8179489Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8179780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:16.8179927Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:16.8179940Z 2025-08-14T22:02:16.8180070Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8180369Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8180452Z return mod(**inputs) 2025-08-14T22:02:16.8180753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8180876Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8181176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8181264Z layer_outputs = layer_module( 2025-08-14T22:02:16.8181553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8181652Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8181957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8182058Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8182351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8182466Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8182766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:16.8182918Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:16.8182931Z 2025-08-14T22:02:16.8183062Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8183317Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8183411Z return mod(**inputs) 2025-08-14T22:02:16.8183711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8183808Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8184117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8184209Z layer_outputs = layer_module( 2025-08-14T22:02:16.8184521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8184626Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8184922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8185033Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8185328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8185429Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8185753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:16.8185852Z attn_output = self.o(attn_output) 2025-08-14T22:02:16.8185866Z 2025-08-14T22:02:16.8186002Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T22:02:16.8186256Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:02:16.8186347Z     return mod(**inputs)
2025-08-14T22:02:16.8186658Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-08-14T22:02:16.8186750Z     decoder_outputs = self.decoder(
2025-08-14T22:02:16.8187059Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-08-14T22:02:16.8187149Z     layer_outputs = layer_module(
2025-08-14T22:02:16.8187428Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:02:16.8187538Z     return super().__call__(*args, **kwargs)
2025-08-14T22:02:16.8187858Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-08-14T22:02:16.8187958Z     self_attention_outputs = self.layer[0](
2025-08-14T22:02:16.8188264Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 609, in forward
2025-08-14T22:02:16.8188450Z     hidden_states = hidden_states + self.dropout(attention_output[0])
2025-08-14T22:02:16.8188463Z

From 2025-08-14T22:02:16.8188594Z through 2025-08-14T22:02:16.8353694Z the same "cudagraph partition due to non gpu ops. Found from :" warning repeats with the identical outer call path (huggingface.py:532 forward_pass -> modeling_t5.py:1762 forward -> modeling_t5.py:1092 forward -> modeling_layers.py:94 __call__), differing only in the innermost call site, all in /opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py:

  under self-attention (line 681: self_attention_outputs = self.layer[0]( ):
    line 609: hidden_states = hidden_states + self.dropout(attention_output[0])
    via line 599 (attention_output = self.SelfAttention( ):
      line 490: query_states = self.q(hidden_states)
      line 510: key_states = self.k(current_states)
      line 511: value_states = self.v(current_states)
      line 526: scores = torch.matmul(query_states, key_states.transpose(3, 2))
      line 565: attn_output = torch.matmul(attn_weights, value_states)
      line 567: attn_output = attn_output.transpose(1, 2).contiguous()
      line 569: attn_output = self.o(attn_output)
  under cross-attention (line 705: cross_attention_outputs = self.layer[1]( ):
    line 647: layer_output = hidden_states + self.dropout(attention_output[0])
    via line 635 (attention_output = self.EncDecAttention( ):
      the same call sites as above: lines 490, 510, 511, 526, 565, 567, 569
  under the feed-forward layer (line 731: hidden_states = self.layer[-1](hidden_states); line 342: forwarded_states = self.DenseReluDense(forwarded_states)):
    line 287: hidden_states = self.wi(hidden_states)
    line 288: hidden_states = self.act(hidden_states)
    line 296: hidden_states = self.wo(hidden_states)

2025-08-14T22:02:16.8353694Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T22:02:16.8353950Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8354034Z return mod(**inputs) 2025-08-14T22:02:16.8354341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8354440Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8354739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8354836Z layer_outputs = layer_module( 2025-08-14T22:02:16.8355118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8355247Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8355552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.8355666Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.8355995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 343, in forward 2025-08-14T22:02:16.8356160Z hidden_states = hidden_states + self.dropout(forwarded_states) 2025-08-14T22:02:16.8356173Z 2025-08-14T22:02:16.8356299Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8356554Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8356635Z return mod(**inputs) 2025-08-14T22:02:16.8356933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8357031Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8357380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8357479Z layer_outputs = layer_module( 2025-08-14T22:02:16.8357767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8357866Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8358167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8358269Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8358567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8358682Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8358978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:16.8359083Z query_states = self.q(hidden_states) 2025-08-14T22:02:16.8359096Z 2025-08-14T22:02:16.8359227Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8359512Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8359603Z return mod(**inputs) 2025-08-14T22:02:16.8359907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8360005Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8360305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8360395Z layer_outputs = layer_module( 2025-08-14T22:02:16.8360682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8360799Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8361095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8361259Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8361555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8361664Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8361958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:16.8362052Z key_states = self.k(current_states) 2025-08-14T22:02:16.8362065Z 2025-08-14T22:02:16.8362199Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8362450Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8362535Z return mod(**inputs) 2025-08-14T22:02:16.8362846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8362956Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8363341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8363458Z layer_outputs = layer_module( 2025-08-14T22:02:16.8363791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8363898Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8364194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8364298Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8364590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8364692Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8364996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:16.8365155Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:16.8365170Z 2025-08-14T22:02:16.8365294Z cudagraph partition due to non gpu ops. 
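Annotation: the frames above all land inside T5 attention: the q/k/v projections (modeling_t5.py:490/510/511), the score matmul at line 526, the weighted sum at line 565, and the transpose/output projection at lines 567/569. A standalone sketch of that computation follows; tensor names mirror the traceback, the missing 1/sqrt(d) scale follows T5's convention, but the shapes are illustrative and this is not the transformers source.

import torch

batch, heads, q_len, kv_len, head_dim = 2, 8, 16, 16, 64

query_states = torch.randn(batch, heads, q_len, head_dim)
key_states = torch.randn(batch, heads, kv_len, head_dim)
value_states = torch.randn(batch, heads, kv_len, head_dim)

# modeling_t5.py:526 -- raw attention scores; T5 applies no 1/sqrt(d) scaling.
scores = torch.matmul(query_states, key_states.transpose(3, 2))
attn_weights = torch.softmax(scores.float(), dim=-1).type_as(scores)

# modeling_t5.py:565/567 -- weighted sum over values, then move heads back next
# to the sequence dimension before the output projection self.o(...).
attn_output = torch.matmul(attn_weights, value_states)
attn_output = attn_output.transpose(1, 2).contiguous().view(batch, q_len, heads * head_dim)
print(attn_output.shape)  # torch.Size([2, 16, 512])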
Found from : 2025-08-14T22:02:16.8365547Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8365627Z return mod(**inputs) 2025-08-14T22:02:16.8365985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8366074Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8366371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8366464Z layer_outputs = layer_module( 2025-08-14T22:02:16.8366745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8366843Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8367170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8367276Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8367579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8367678Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8367969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:16.8368069Z value_states = self.v(current_states) 2025-08-14T22:02:16.8368082Z 2025-08-14T22:02:16.8368209Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8368482Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8368565Z return mod(**inputs) 2025-08-14T22:02:16.8368865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8368962Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8369261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8369350Z layer_outputs = layer_module( 2025-08-14T22:02:16.8369641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8369743Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8370043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8370143Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8370440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8370575Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8370870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:16.8371023Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:16.8371043Z 2025-08-14T22:02:16.8371171Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8371419Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8371512Z return mod(**inputs) 2025-08-14T22:02:16.8371811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8371901Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8372207Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8372299Z layer_outputs = layer_module( 2025-08-14T22:02:16.8372588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8372689Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8372983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8373089Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8373383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8373486Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8373788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:16.8373923Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:16.8373937Z 2025-08-14T22:02:16.8374074Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8374321Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8374425Z return mod(**inputs) 2025-08-14T22:02:16.8374735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8374827Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8375134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8375221Z layer_outputs = layer_module( 2025-08-14T22:02:16.8375499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8375611Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8375930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:16.8376032Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:16.8376336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:16.8376439Z attention_output = self.SelfAttention( 2025-08-14T22:02:16.8376743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:16.8376837Z attn_output = self.o(attn_output) 2025-08-14T22:02:16.8376850Z 2025-08-14T22:02:16.8376976Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8377230Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8377312Z return mod(**inputs) 2025-08-14T22:02:16.8381796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8381909Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8382237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8382336Z layer_outputs = layer_module( 2025-08-14T22:02:16.8382622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8382743Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8383051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8383156Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8383460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8383565Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8383865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:16.8383972Z query_states = self.q(hidden_states) 2025-08-14T22:02:16.8383985Z 2025-08-14T22:02:16.8384112Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8384363Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8384455Z return mod(**inputs) 2025-08-14T22:02:16.8384765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8384862Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8385160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8385246Z layer_outputs = layer_module( 2025-08-14T22:02:16.8385532Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8385628Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8385923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8386032Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8386362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8386472Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8386769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:16.8386865Z key_states = self.k(current_states) 2025-08-14T22:02:16.8386878Z 2025-08-14T22:02:16.8387010Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8387257Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8387345Z return mod(**inputs) 2025-08-14T22:02:16.8387667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8387763Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8388070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8388159Z layer_outputs = layer_module( 2025-08-14T22:02:16.8388436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8388539Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8388831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8388937Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8389229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8389332Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8389657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:16.8389818Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:16.8389833Z 2025-08-14T22:02:16.8389984Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8390235Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8390316Z return mod(**inputs) 2025-08-14T22:02:16.8390620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8390712Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8391010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8391107Z layer_outputs = layer_module( 2025-08-14T22:02:16.8391394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8391500Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8391795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8391898Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8392267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8392372Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8392716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:16.8392821Z value_states = self.v(current_states) 2025-08-14T22:02:16.8392833Z 2025-08-14T22:02:16.8392960Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8393217Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8393301Z return mod(**inputs) 2025-08-14T22:02:16.8393603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8393731Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8394036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8394127Z layer_outputs = layer_module( 2025-08-14T22:02:16.8394416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8394514Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8394818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8394921Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8395238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8395351Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8395647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:16.8395788Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:16.8395800Z 2025-08-14T22:02:16.8395925Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8396173Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8396263Z return mod(**inputs) 2025-08-14T22:02:16.8396561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8396653Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8397012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8397130Z layer_outputs = layer_module( 2025-08-14T22:02:16.8397416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8397513Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8397832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8397938Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8398230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8398343Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8398635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:16.8398767Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:16.8398782Z 2025-08-14T22:02:16.8398919Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8399165Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8399248Z return mod(**inputs) 2025-08-14T22:02:16.8399556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8399648Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8399957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8400045Z layer_outputs = layer_module( 2025-08-14T22:02:16.8400321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8400424Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8400717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:16.8400821Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:16.8401844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:16.8401961Z attention_output = self.EncDecAttention( 2025-08-14T22:02:16.8402268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:16.8402364Z attn_output = self.o(attn_output) 2025-08-14T22:02:16.8402377Z 2025-08-14T22:02:16.8402514Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8402777Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8402861Z return mod(**inputs) 2025-08-14T22:02:16.8403170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8403290Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8403591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8403695Z layer_outputs = layer_module( 2025-08-14T22:02:16.8403975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8404073Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8404376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.8404490Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.8404792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:16.8404936Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:16.8405228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T22:02:16.8405354Z hidden_states = self.wi(hidden_states) 2025-08-14T22:02:16.8405367Z 2025-08-14T22:02:16.8405493Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:16.8405751Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8405854Z return mod(**inputs) 2025-08-14T22:02:16.8406150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8406246Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8406543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8415028Z layer_outputs = layer_module( 2025-08-14T22:02:16.8415415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8415531Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8415935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.8416067Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.8416460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:16.8416643Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:16.8417035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T22:02:16.8417144Z hidden_states = self.act(hidden_states) 2025-08-14T22:02:16.8417165Z 2025-08-14T22:02:16.8417307Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8417628Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8417729Z return mod(**inputs) 2025-08-14T22:02:16.8418139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:16.8418233Z decoder_outputs = self.decoder( 2025-08-14T22:02:16.8418683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:16.8418781Z layer_outputs = layer_module( 2025-08-14T22:02:16.8419079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:16.8419177Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:16.8419469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:16.8419586Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:16.8419902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:16.8420047Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:16.8420352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T22:02:16.8420452Z hidden_states = self.wo(hidden_states) 2025-08-14T22:02:16.8420466Z 2025-08-14T22:02:16.8420600Z cudagraph partition due to non gpu ops. 
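Annotation: the partitions above fall in T5's feed-forward block (DenseReluDense): wi at modeling_t5.py:287, the activation at 288, wo at 296, and the residual add at 343. A minimal sketch of that block under the same naming, for orientation only; dimensions are made up and the real layer is additionally wrapped in T5's RMS-style LayerNorm.

import torch
import torch.nn as nn

class DenseReluDenseSketch(nn.Module):
    """Illustrative stand-in for the wi -> act -> dropout -> wo block in modeling_t5.py."""
    def __init__(self, d_model=512, d_ff=2048, dropout=0.1):
        super().__init__()
        self.wi = nn.Linear(d_model, d_ff, bias=False)    # modeling_t5.py:287
        self.act = nn.ReLU()                              # modeling_t5.py:288
        self.dropout = nn.Dropout(dropout)
        self.wo = nn.Linear(d_ff, d_model, bias=False)    # modeling_t5.py:296

    def forward(self, hidden_states):
        forwarded_states = self.wo(self.dropout(self.act(self.wi(hidden_states))))
        # modeling_t5.py:343 -- residual connection around the feed-forward output
        return hidden_states + forwarded_states

out = DenseReluDenseSketch()(torch.randn(2, 16, 512))
print(out.shape)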
Found from : 2025-08-14T22:02:16.8420847Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8420929Z return mod(**inputs) 2025-08-14T22:02:16.8423359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1791, in forward 2025-08-14T22:02:16.8423469Z lm_logits = self.lm_head(sequence_output) 2025-08-14T22:02:16.8423482Z 2025-08-14T22:02:16.8423619Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:16.8423867Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:16.8423975Z return mod(**inputs) 2025-08-14T22:02:16.8424279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1798, in forward 2025-08-14T22:02:16.8424454Z loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1)) 2025-08-14T22:02:16.8424499Z 2025-08-14T22:02:24.8849368Z Compilation time (from dynamo_timed): 23.743073678 2025-08-14T22:02:24.9090767Z pass 2025-08-14T22:02:24.9092331Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:02:24.9093464Z TIMING: _recursive_pre_grad_passes:0.08294 _recursive_joint_graph_passes:0.81922 _recursive_post_grad_passes:0.26279 async_compile.wait:1.06732 code_gen:6.91207 inductor_compile:11.52809 backend_compile:19.84709 gc:0.00088 entire_frame_compile:23.74307 total_wall_time:23.74307 2025-08-14T22:02:24.9094636Z STATS: call_* op count: 810 | FakeTensorMode.__torch_dispatch__:34635 | FakeTensor.__torch_dispatch__:5221 | ProxyTorchDispatchMode.__torch_dispatch__:8556 2025-08-14T22:02:24.9095269Z Dynamo produced 1 graphs covering 810 ops with 0 graph breaks (0 unique) 2025-08-14T22:02:31.2839485Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-14T22:02:31.2840666Z from pkg_resources import resource_filename 2025-08-14T22:02:31.9929895Z 2025-08-14T22:02:33.6861065Z loading model: 0it [00:00, ?it/s] 2025-08-14T22:02:33.6861410Z loading model: 0it [00:01, ?it/s] 2025-08-14T22:02:33.6874629Z cpu eval T5Small 2025-08-14T22:02:35.9495623Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:02:36.7895967Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:02:37.8751369Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:02:55.7900169Z cudagraph partition due to non gpu ops. 
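Annotation: the last two partitions before the timing summary come from the seq2seq head, lm_logits = self.lm_head(sequence_output) at modeling_t5.py:1791 and the cross-entropy at line 1798, which flattens logits and labels before the loss call. Sketched below with illustrative shapes; the ignore_index=-100 padding convention is the usual Hugging Face default, not something read from this log.

import torch
import torch.nn as nn

batch, seq_len, d_model, vocab = 2, 16, 512, 32128

sequence_output = torch.randn(batch, seq_len, d_model)
labels = torch.randint(0, vocab, (batch, seq_len))

lm_head = nn.Linear(d_model, vocab, bias=False)       # modeling_t5.py:1791
loss_fct = nn.CrossEntropyLoss(ignore_index=-100)     # padded label positions are masked with -100

lm_logits = lm_head(sequence_output)
# modeling_t5.py:1798 -- flatten (batch, seq, vocab) to (batch*seq, vocab) and labels to (batch*seq,)
loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
print(loss.item())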
Found from : 2025-08-14T22:02:55.7900809Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.7901231Z return mod(**inputs) 2025-08-14T22:02:55.7901723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.7902419Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.7903044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.7903750Z layer_outputs = layer_module( 2025-08-14T22:02:55.7904255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.7904776Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.7905282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.7910208Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.7910876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.7911488Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.7912243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 546, in forward 2025-08-14T22:02:55.7912901Z position_bias = position_bias + causal_mask 2025-08-14T22:02:55.7913090Z 2025-08-14T22:02:55.7913279Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.7913794Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.7914424Z return mod(**inputs) 2025-08-14T22:02:55.7914954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.7915537Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.7916075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.7916746Z layer_outputs = layer_module( 2025-08-14T22:02:55.7917233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.7917793Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.7918449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.7919022Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.7919581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.7920147Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.7920781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:55.7921327Z query_states = self.q(hidden_states) 2025-08-14T22:02:55.7921505Z 2025-08-14T22:02:55.7921638Z cudagraph partition due to non gpu ops. 
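Annotation: this second compile (after "cpu eval T5Small" at 22:02:33) adds one new frame, position_bias = position_bias + causal_mask at modeling_t5.py:546, i.e. the decoder's additive causal mask being folded into the relative position bias before softmax. A small sketch of that additive-mask pattern; the mask construction here is generic, not the transformers helper.

import torch

heads, q_len, kv_len = 8, 16, 16

# Relative position bias as T5 computes it: one value per (head, query, key) pair.
position_bias = torch.randn(1, heads, q_len, kv_len)

# Additive causal mask: 0 where attention is allowed, -inf where it is not,
# so masked positions vanish after softmax.
allowed = torch.tril(torch.ones(q_len, kv_len, dtype=torch.bool))
causal_mask = torch.zeros(q_len, kv_len).masked_fill(~allowed, float("-inf"))

# modeling_t5.py:546 -- the mask is simply added to the bias, which is later added to the scores.
position_bias = position_bias + causal_mask
attn_weights = torch.softmax(position_bias, dim=-1)
print(attn_weights.shape)  # torch.Size([1, 8, 16, 16])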
Found from : 2025-08-14T22:02:55.7922088Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.7922497Z return mod(**inputs) 2025-08-14T22:02:55.7922919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.7923379Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.7923836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.7924289Z layer_outputs = layer_module( 2025-08-14T22:02:55.7924728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.7925183Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.7925690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.7926159Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.7926618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.7927090Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.7927549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:55.7928007Z key_states = self.k(current_states) 2025-08-14T22:02:55.7928181Z 2025-08-14T22:02:55.7928348Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.7928803Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.7929306Z return mod(**inputs) 2025-08-14T22:02:55.7929882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.7930448Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.7931005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.7931621Z layer_outputs = layer_module( 2025-08-14T22:02:55.7932147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.7932706Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.7933226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.7933826Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.7934440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.7943482Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.7944208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:55.7945050Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:55.7945369Z 2025-08-14T22:02:55.7945567Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.7946236Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.7946678Z return mod(**inputs) 2025-08-14T22:02:55.7947168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.7947820Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.7948365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.7958156Z layer_outputs = layer_module( 2025-08-14T22:02:55.7958866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.7959486Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.7960041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.7960598Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.7961157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.7961818Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.7962379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:55.7962936Z value_states = self.v(current_states) 2025-08-14T22:02:55.7963132Z 2025-08-14T22:02:55.7963335Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.7964061Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.7964537Z return mod(**inputs) 2025-08-14T22:02:55.7964974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.7965444Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.7965890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.7966349Z layer_outputs = layer_module( 2025-08-14T22:02:55.7966776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.7967221Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.7967739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.7968216Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.7968730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.7969192Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.7969658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:55.7970162Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:55.7970444Z 2025-08-14T22:02:55.7970578Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.7971089Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.7971638Z return mod(**inputs) 2025-08-14T22:02:55.7972176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.7972800Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.7973385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.7973937Z layer_outputs = layer_module( 2025-08-14T22:02:55.7974467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.7975029Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.7975563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.7976300Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.7976758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.7977242Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.7977867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:55.7984743Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:55.7984957Z 2025-08-14T22:02:55.7985099Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.7985559Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.7985972Z return mod(**inputs) 2025-08-14T22:02:55.7986395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.7986865Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.7987327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.7987795Z layer_outputs = layer_module( 2025-08-14T22:02:55.7988225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.7988679Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.7989207Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.7989677Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.7990137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.7990603Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.7991068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:55.7991530Z attn_output = self.o(attn_output) 2025-08-14T22:02:55.7991705Z 2025-08-14T22:02:55.7991835Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.7992282Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.7992718Z return mod(**inputs) 2025-08-14T22:02:55.7993225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.7993732Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.7994190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.7994647Z layer_outputs = layer_module( 2025-08-14T22:02:55.7995076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.7995528Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.7995995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.7996460Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.7996937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.7998393Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.7998863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:55.7999326Z value_states = self.v(current_states) 2025-08-14T22:02:55.7999538Z 2025-08-14T22:02:55.7999668Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8000118Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8000518Z return mod(**inputs) 2025-08-14T22:02:55.8000951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8001540Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8002001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8002470Z layer_outputs = layer_module( 2025-08-14T22:02:55.8002911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8003365Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8003825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8004300Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8004769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8005248Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8005707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:55.8006170Z query_states = self.q(hidden_states) 2025-08-14T22:02:55.8006339Z 2025-08-14T22:02:55.8006482Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8006925Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8011543Z return mod(**inputs) 2025-08-14T22:02:55.8012020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8012494Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8012944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8013408Z layer_outputs = layer_module( 2025-08-14T22:02:55.8013840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8014291Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8014743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8015244Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8015716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8016182Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8016643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:55.8017103Z key_states = self.k(current_states) 2025-08-14T22:02:55.8017269Z 2025-08-14T22:02:55.8017405Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8017842Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8018244Z return mod(**inputs) 2025-08-14T22:02:55.8018668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8019124Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8019573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8020053Z layer_outputs = layer_module( 2025-08-14T22:02:55.8020481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8020949Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8021407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8021937Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8022452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8022914Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8023383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:55.8023910Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:55.8024143Z 2025-08-14T22:02:55.8024283Z cudagraph partition due to non gpu ops. 
Found from :
[The same "cudagraph partition due to non gpu ops. Found from :" warning repeats here many times, identical apart from timestamps and the final stack frame. The encoder traces all share the prefix huggingface.py:532 (return mod(**inputs)) -> modeling_t5.py:1725 (encoder_outputs = self.encoder) -> modeling_t5.py:1092 (layer_outputs = layer_module) -> modeling_layers.py:94 (return super().__call__(*args, **kwargs)). The distinct trigger sites reported are:
  via modeling_t5.py:681 (self_attention_outputs = self.layer[0]) and :599 (attention_output = self.SelfAttention):
    modeling_t5.py:490  query_states = self.q(hidden_states)
    modeling_t5.py:510  key_states = self.k(current_states)
    modeling_t5.py:511  value_states = self.v(current_states)
    modeling_t5.py:526  scores = torch.matmul(query_states, key_states.transpose(3, 2))
    modeling_t5.py:565  attn_output = torch.matmul(attn_weights, value_states)
    modeling_t5.py:567  attn_output = attn_output.transpose(1, 2).contiguous()
    modeling_t5.py:569  attn_output = self.o(attn_output)
  via modeling_t5.py:681 (self_attention_outputs = self.layer[0]) alone:
    modeling_t5.py:609  hidden_states = hidden_states + self.dropout(attention_output[0])
  via modeling_t5.py:731 (hidden_states = self.layer[-1](hidden_states)) and :342 (forwarded_states = self.DenseReluDense(forwarded_states)):
    modeling_t5.py:287  hidden_states = self.wi(hidden_states)
    modeling_t5.py:288  hidden_states = self.act(hidden_states)
    modeling_t5.py:296  hidden_states = self.wo(hidden_states)
  and one cross-attention trace via modeling_t5.py:1762 (decoder_outputs = self.decoder), :705 (cross_attention_outputs = self.layer[1]) and :635 (attention_output = self.EncDecAttention):
    modeling_t5.py:490  query_states = self.q(hidden_states)
This block of warnings is emitted repeatedly, verbatim apart from timestamps.]
2025-08-14T22:02:55.8404227Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T22:02:55.8404685Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8405096Z return mod(**inputs) 2025-08-14T22:02:55.8405531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8405985Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8406437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8406896Z layer_outputs = layer_module( 2025-08-14T22:02:55.8407345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8407801Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8408269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8408740Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8409203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8409671Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8410136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:55.8410598Z attn_output = self.o(attn_output) 2025-08-14T22:02:55.8410763Z 2025-08-14T22:02:55.8410894Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8411342Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8411747Z return mod(**inputs) 2025-08-14T22:02:55.8412198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8412648Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8413097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8413655Z layer_outputs = layer_module( 2025-08-14T22:02:55.8414133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8414584Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8415047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8415527Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8416000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8416518Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8417022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T22:02:55.8417488Z hidden_states = self.wi(hidden_states) 2025-08-14T22:02:55.8417657Z 2025-08-14T22:02:55.8417784Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8418281Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8418684Z return mod(**inputs) 2025-08-14T22:02:55.8419102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8419570Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8420022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8420480Z layer_outputs = layer_module( 2025-08-14T22:02:55.8420902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8421352Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8421842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8422320Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8422793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8423305Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8423812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T22:02:55.8424270Z hidden_states = self.act(hidden_states) 2025-08-14T22:02:55.8424450Z 2025-08-14T22:02:55.8424600Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8425046Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8425447Z return mod(**inputs) 2025-08-14T22:02:55.8425869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8426340Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8426795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8427246Z layer_outputs = layer_module( 2025-08-14T22:02:55.8427678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8434501Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8434967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8435444Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8435952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8436476Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8436998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T22:02:55.8437471Z hidden_states = self.wo(hidden_states) 2025-08-14T22:02:55.8437654Z 2025-08-14T22:02:55.8437786Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8438233Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8438628Z return mod(**inputs) 2025-08-14T22:02:55.8439058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8439525Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8439978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8440428Z layer_outputs = layer_module( 2025-08-14T22:02:55.8440871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8441382Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8441834Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8442368Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8442883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8443361Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8443817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:55.8444286Z query_states = self.q(hidden_states) 2025-08-14T22:02:55.8444453Z 2025-08-14T22:02:55.8444589Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8445048Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8445449Z return mod(**inputs) 2025-08-14T22:02:55.8445870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8446326Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8446777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8447278Z layer_outputs = layer_module( 2025-08-14T22:02:55.8447699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8448148Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8448637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8449447Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8449906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8450384Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8450845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:55.8451473Z key_states = self.k(current_states) 2025-08-14T22:02:55.8451637Z 2025-08-14T22:02:55.8451764Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8452214Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8452618Z return mod(**inputs) 2025-08-14T22:02:55.8453038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8453559Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8454009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8454461Z layer_outputs = layer_module( 2025-08-14T22:02:55.8454912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8455365Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8455833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8456295Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8456753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8461391Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8461855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:55.8462375Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:55.8462616Z 2025-08-14T22:02:55.8462754Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8463385Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8463984Z return mod(**inputs) 2025-08-14T22:02:55.8464608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8465068Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8465520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8465969Z layer_outputs = layer_module( 2025-08-14T22:02:55.8466400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8466854Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8467375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8467840Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8468301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8468874Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8469592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:55.8470115Z value_states = self.v(current_states) 2025-08-14T22:02:55.8470292Z 2025-08-14T22:02:55.8470423Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8470959Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8471462Z return mod(**inputs) 2025-08-14T22:02:55.8471947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8472410Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8472855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8473323Z layer_outputs = layer_module( 2025-08-14T22:02:55.8473755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8474214Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8474675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8475158Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8475639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8476138Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8476598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:55.8477097Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:55.8477322Z 2025-08-14T22:02:55.8477461Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8477898Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8478298Z return mod(**inputs) 2025-08-14T22:02:55.8478723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8479190Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8479631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8480091Z layer_outputs = layer_module( 2025-08-14T22:02:55.8480515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8480959Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8481505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8481974Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8482439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8482897Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8483357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:55.8483854Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:55.8484056Z 2025-08-14T22:02:55.8484193Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8484638Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8485039Z return mod(**inputs) 2025-08-14T22:02:55.8485498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8490208Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8490658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8491118Z layer_outputs = layer_module( 2025-08-14T22:02:55.8491546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8491987Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8492540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8493015Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8493472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8493949Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8494414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:55.8494878Z attn_output = self.o(attn_output) 2025-08-14T22:02:55.8495046Z 2025-08-14T22:02:55.8495175Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8495619Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8496022Z return mod(**inputs) 2025-08-14T22:02:55.8496439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8496903Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8497379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8497841Z layer_outputs = layer_module( 2025-08-14T22:02:55.8498263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8498745Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8499210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8499676Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8500133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 609, in forward 2025-08-14T22:02:55.8500765Z hidden_states = hidden_states + self.dropout(attention_output[0]) 2025-08-14T22:02:55.8501019Z 2025-08-14T22:02:55.8501154Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8501591Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8501995Z return mod(**inputs) 2025-08-14T22:02:55.8502419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8502880Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8503319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8503773Z layer_outputs = layer_module( 2025-08-14T22:02:55.8504201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8504643Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8504946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8505062Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8505364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8505509Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8505826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T22:02:55.8505938Z hidden_states = self.wi(hidden_states) 2025-08-14T22:02:55.8505951Z 2025-08-14T22:02:55.8506079Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8506335Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8506419Z return mod(**inputs) 2025-08-14T22:02:55.8506719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8506817Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8507139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8507231Z layer_outputs = layer_module( 2025-08-14T22:02:55.8507520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8507618Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8507922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8508032Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8508324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8508473Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8508766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T22:02:55.8508871Z hidden_states = self.act(hidden_states) 2025-08-14T22:02:55.8508906Z 2025-08-14T22:02:55.8509034Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8509285Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8509391Z return mod(**inputs) 2025-08-14T22:02:55.8509689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T22:02:55.8509782Z encoder_outputs = self.encoder( 2025-08-14T22:02:55.8510087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8510175Z layer_outputs = layer_module( 2025-08-14T22:02:55.8510462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8510558Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8510854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8510975Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8511269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8511412Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8511711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T22:02:55.8511813Z hidden_states = self.wo(hidden_states) 2025-08-14T22:02:55.8511826Z 2025-08-14T22:02:55.8511960Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8512208Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8512293Z return mod(**inputs) 2025-08-14T22:02:55.8512598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8512690Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8513016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8513104Z layer_outputs = layer_module( 2025-08-14T22:02:55.8513387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8513491Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8513785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8513886Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8514181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8514284Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8514609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:55.8514703Z key_states = self.k(current_states) 2025-08-14T22:02:55.8514716Z 2025-08-14T22:02:55.8519095Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8519382Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8519491Z return mod(**inputs) 2025-08-14T22:02:55.8519794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8519891Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8520186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8520282Z layer_outputs = layer_module( 2025-08-14T22:02:55.8520564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8520688Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8520994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8521100Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8521495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8521600Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8521893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:55.8522059Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:55.8522072Z 2025-08-14T22:02:55.8522197Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8522447Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8522536Z return mod(**inputs) 2025-08-14T22:02:55.8522833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8522931Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8523231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8523324Z layer_outputs = layer_module( 2025-08-14T22:02:55.8523610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8523708Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8524005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8524105Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8524400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8524513Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8524840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:55.8524941Z value_states = self.v(current_states) 2025-08-14T22:02:55.8524954Z 2025-08-14T22:02:55.8525095Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8525347Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8525435Z return mod(**inputs) 2025-08-14T22:02:55.8525735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8525825Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8526152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8526244Z layer_outputs = layer_module( 2025-08-14T22:02:55.8526523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8526630Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8526922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8527027Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8527322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8527426Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8527722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:55.8527856Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:55.8527869Z 2025-08-14T22:02:55.8528000Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8528269Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8528350Z return mod(**inputs) 2025-08-14T22:02:55.8528655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8528766Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8529064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8529157Z layer_outputs = layer_module( 2025-08-14T22:02:55.8529506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8529612Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8529963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8530063Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8530360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8530462Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8530752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:55.8530890Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:55.8530903Z 2025-08-14T22:02:55.8531027Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8531284Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8531364Z return mod(**inputs) 2025-08-14T22:02:55.8531659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8531763Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8532059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8532152Z layer_outputs = layer_module( 2025-08-14T22:02:55.8532454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8532558Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8532859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8532958Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8533249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8533358Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8533673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:55.8533782Z attn_output = self.o(attn_output) 2025-08-14T22:02:55.8533796Z 2025-08-14T22:02:55.8533924Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8534173Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8534267Z return mod(**inputs) 2025-08-14T22:02:55.8534565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8534654Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8534959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8535049Z layer_outputs = layer_module( 2025-08-14T22:02:55.8535334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8535435Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8535775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8535896Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8536192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8536367Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8536660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T22:02:55.8536758Z hidden_states = self.wi(hidden_states) 2025-08-14T22:02:55.8536770Z 2025-08-14T22:02:55.8536902Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8537150Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8537231Z return mod(**inputs) 2025-08-14T22:02:55.8537537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8537629Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8537944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8538033Z layer_outputs = layer_module( 2025-08-14T22:02:55.8538312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8538415Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8538705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8538824Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8539115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8539258Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8539562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T22:02:55.8539682Z hidden_states = self.act(hidden_states) 2025-08-14T22:02:55.8539695Z 2025-08-14T22:02:55.8539823Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8540079Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8540159Z return mod(**inputs) 2025-08-14T22:02:55.8540470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8540558Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8540853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8540947Z layer_outputs = layer_module( 2025-08-14T22:02:55.8541247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8541348Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8541646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8541757Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8542057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8542198Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8542489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T22:02:55.8542599Z hidden_states = self.wo(hidden_states) 2025-08-14T22:02:55.8542612Z 2025-08-14T22:02:55.8542740Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8542997Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8543099Z return mod(**inputs) 2025-08-14T22:02:55.8543396Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8543515Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8552393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8552492Z layer_outputs = layer_module( 2025-08-14T22:02:55.8552872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8552980Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8553382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8553494Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8553892Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8554018Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8554416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:55.8554535Z query_states = self.q(hidden_states) 2025-08-14T22:02:55.8554549Z 2025-08-14T22:02:55.8554691Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8555001Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8555092Z return mod(**inputs) 2025-08-14T22:02:55.8555389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8555477Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8555788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8555877Z layer_outputs = layer_module( 2025-08-14T22:02:55.8556219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8556313Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8556607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8556711Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8556999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8557099Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8557393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:55.8557486Z key_states = self.k(current_states) 2025-08-14T22:02:55.8557537Z 2025-08-14T22:02:55.8557672Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8557920Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8558001Z return mod(**inputs) 2025-08-14T22:02:55.8560391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8560489Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8560792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8560879Z layer_outputs = layer_module( 2025-08-14T22:02:55.8561154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8561335Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8561631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8561767Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8562065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8562167Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8562513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:55.8562673Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:55.8562687Z 2025-08-14T22:02:55.8562813Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8563074Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8563155Z return mod(**inputs) 2025-08-14T22:02:55.8563451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8563552Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8563849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8563949Z layer_outputs = layer_module( 2025-08-14T22:02:55.8564229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8564327Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8564628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8564728Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8565026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8565126Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8565419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:55.8565528Z value_states = self.v(current_states) 2025-08-14T22:02:55.8565540Z 2025-08-14T22:02:55.8565665Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8565936Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8566029Z return mod(**inputs) 2025-08-14T22:02:55.8566325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8566426Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8566724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8566811Z layer_outputs = layer_module( 2025-08-14T22:02:55.8567094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8567215Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8567508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8567621Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8567912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8568020Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8568310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:55.8568446Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:55.8568459Z 2025-08-14T22:02:55.8568595Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8568847Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8568938Z return mod(**inputs) 2025-08-14T22:02:55.8569255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8569345Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8569652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8569760Z layer_outputs = layer_module( 2025-08-14T22:02:55.8570036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8570140Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8570431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8570536Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8570828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8570932Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8571233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:55.8571365Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:55.8571381Z 2025-08-14T22:02:55.8571513Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T22:02:55.8571760Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:02:55.8571840Z     return mod(**inputs)
2025-08-14T22:02:55.8572147Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-08-14T22:02:55.8572235Z     decoder_outputs = self.decoder(
2025-08-14T22:02:55.8572529Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-08-14T22:02:55.8572628Z     layer_outputs = layer_module(
2025-08-14T22:02:55.8572986Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:02:55.8573094Z     return super().__call__(*args, **kwargs)
2025-08-14T22:02:55.8573463Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-08-14T22:02:55.8573565Z     self_attention_outputs = self.layer[0](
2025-08-14T22:02:55.8573870Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-08-14T22:02:55.8573972Z     attention_output = self.SelfAttention(
2025-08-14T22:02:55.8574261Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward
2025-08-14T22:02:55.8574365Z     attn_output = self.o(attn_output)

The same "cudagraph partition due to non gpu ops" message is reported repeatedly through the same call path (forward_pass -> decoder -> layer_module -> __call__), ending at each of the following call sites in transformers/models/t5/modeling_t5.py:

  self-attention, via line 681 "self_attention_outputs = self.layer[0](" and line 599 "attention_output = self.SelfAttention(":
    line 490: query_states = self.q(hidden_states)
    line 510: key_states = self.k(current_states)
    line 526: scores = torch.matmul(query_states, key_states.transpose(3, 2))
    line 511: value_states = self.v(current_states)
    line 565: attn_output = torch.matmul(attn_weights, value_states)
    line 567: attn_output = attn_output.transpose(1, 2).contiguous()
    line 569: attn_output = self.o(attn_output)
    line 609 (directly under line 681): hidden_states = hidden_states + self.dropout(attention_output[0])

  cross-attention, via line 705 "cross_attention_outputs = self.layer[1](" and line 635 "attention_output = self.EncDecAttention(":
    line 490: query_states = self.q(hidden_states)
    line 510: key_states = self.k(current_states)
    line 526: scores = torch.matmul(query_states, key_states.transpose(3, 2))
    line 511: value_states = self.v(current_states)
    line 565: attn_output = torch.matmul(attn_weights, value_states)
    line 567: attn_output = attn_output.transpose(1, 2).contiguous()
    line 569: attn_output = self.o(attn_output)

  feed-forward, via line 731 "hidden_states = self.layer[-1](hidden_states)" and line 342 "forwarded_states = self.DenseReluDense(forwarded_states)":
    line 287: hidden_states = self.wi(hidden_states)
    line 288: hidden_states = self.act(hidden_states)
    line 296: hidden_states = self.wo(hidden_states)
Found from : 2025-08-14T22:02:55.8733052Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8733142Z return mod(**inputs) 2025-08-14T22:02:55.8733439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8733531Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8733837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8733926Z layer_outputs = layer_module( 2025-08-14T22:02:55.8734208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8734341Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8734635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8734769Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8735067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8735172Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8735470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:55.8735568Z attn_output = self.o(attn_output) 2025-08-14T22:02:55.8735582Z 2025-08-14T22:02:55.8735717Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8735968Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8736051Z return mod(**inputs) 2025-08-14T22:02:55.8736358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8736451Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8736751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8736853Z layer_outputs = layer_module( 2025-08-14T22:02:55.8737196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8737304Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8737597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8737695Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8737999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 647, in forward 2025-08-14T22:02:55.8738162Z layer_output = hidden_states + self.dropout(attention_output[0]) 2025-08-14T22:02:55.8738175Z 2025-08-14T22:02:55.8738329Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8738589Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8738673Z return mod(**inputs) 2025-08-14T22:02:55.8738978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8739070Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8739369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8739468Z layer_outputs = layer_module( 2025-08-14T22:02:55.8739771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8739871Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8740173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8740288Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8740592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8740741Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8741033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T22:02:55.8741145Z hidden_states = self.wi(hidden_states) 2025-08-14T22:02:55.8741158Z 2025-08-14T22:02:55.8741286Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8741547Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8741663Z return mod(**inputs) 2025-08-14T22:02:55.8741960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8742058Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8742361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8742471Z layer_outputs = layer_module( 2025-08-14T22:02:55.8742769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8742866Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8743169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8743282Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8743578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8743733Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8744028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T22:02:55.8744141Z hidden_states = self.act(hidden_states) 2025-08-14T22:02:55.8744157Z 2025-08-14T22:02:55.8744282Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8744530Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8744621Z return mod(**inputs) 2025-08-14T22:02:55.8744923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8745015Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8745321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8745416Z layer_outputs = layer_module( 2025-08-14T22:02:55.8745704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8745821Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8746115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8746236Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8746528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8746671Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8751380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T22:02:55.8751489Z hidden_states = self.wo(hidden_states) 2025-08-14T22:02:55.8751503Z 2025-08-14T22:02:55.8751706Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8751962Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8752045Z return mod(**inputs) 2025-08-14T22:02:55.8752357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8752451Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8752758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8752848Z layer_outputs = layer_module( 2025-08-14T22:02:55.8753130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8753235Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8753529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8753665Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8753969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8754076Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8754403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:55.8754500Z query_states = self.q(hidden_states) 2025-08-14T22:02:55.8754513Z 2025-08-14T22:02:55.8754640Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8754899Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8754982Z return mod(**inputs) 2025-08-14T22:02:55.8755287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8755381Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8755679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8755773Z layer_outputs = layer_module( 2025-08-14T22:02:55.8756056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8756153Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8756454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8756560Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8756861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8756964Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8757261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:55.8757367Z key_states = self.k(current_states) 2025-08-14T22:02:55.8757380Z 2025-08-14T22:02:55.8757507Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8757783Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8757875Z return mod(**inputs) 2025-08-14T22:02:55.8758171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8758269Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8758565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8758655Z layer_outputs = layer_module( 2025-08-14T22:02:55.8758944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8759045Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8759375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8759480Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8759776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8759891Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8760188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:55.8760350Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:55.8760363Z 2025-08-14T22:02:55.8760500Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8760747Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8760835Z return mod(**inputs) 2025-08-14T22:02:55.8761135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8761404Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8761741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8761876Z layer_outputs = layer_module( 2025-08-14T22:02:55.8762157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8762262Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8762559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8762673Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8762966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8763074Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8763378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:55.8763474Z value_states = self.v(current_states) 2025-08-14T22:02:55.8763488Z 2025-08-14T22:02:55.8763629Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8763883Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8763970Z return mod(**inputs) 2025-08-14T22:02:55.8764275Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8764365Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8764661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8764760Z layer_outputs = layer_module( 2025-08-14T22:02:55.8765041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8765152Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8765472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8765575Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8765923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8766030Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8766324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:55.8766466Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:55.8766479Z 2025-08-14T22:02:55.8766604Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8766883Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8766968Z return mod(**inputs) 2025-08-14T22:02:55.8767266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8767369Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8767668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8767770Z layer_outputs = layer_module( 2025-08-14T22:02:55.8768054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8768156Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8768456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8768555Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8768857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8768981Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8769271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:55.8769466Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:55.8769478Z 2025-08-14T22:02:55.8769603Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8769875Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8769982Z return mod(**inputs) 2025-08-14T22:02:55.8770282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8770380Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8770677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8770767Z layer_outputs = layer_module( 2025-08-14T22:02:55.8771058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8771155Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8771447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8771556Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8771852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8771960Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8772250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:55.8772343Z attn_output = self.o(attn_output) 2025-08-14T22:02:55.8772356Z 2025-08-14T22:02:55.8772494Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8772739Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8772832Z return mod(**inputs) 2025-08-14T22:02:55.8773154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8773248Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8773550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8773635Z layer_outputs = layer_module( 2025-08-14T22:02:55.8773913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8774014Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8774339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8774447Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8774736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8774838Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8775144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:55.8775238Z query_states = self.q(hidden_states) 2025-08-14T22:02:55.8775251Z 2025-08-14T22:02:55.8775385Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8775630Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8775709Z return mod(**inputs) 2025-08-14T22:02:55.8780200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8780296Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8780620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8780716Z layer_outputs = layer_module( 2025-08-14T22:02:55.8780997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8781120Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8781413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8781514Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8781818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8781921Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8782213Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:55.8782320Z key_states = self.k(current_states) 2025-08-14T22:02:55.8782333Z 2025-08-14T22:02:55.8782465Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8782720Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8782804Z return mod(**inputs) 2025-08-14T22:02:55.8783100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8783197Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8783493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8783588Z layer_outputs = layer_module( 2025-08-14T22:02:55.8783865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8783960Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8784265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8784368Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8784682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8784793Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8785086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:55.8785253Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:55.8785266Z 2025-08-14T22:02:55.8785393Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8785639Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8785727Z return mod(**inputs) 2025-08-14T22:02:55.8786050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8786143Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8786446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8786538Z layer_outputs = layer_module( 2025-08-14T22:02:55.8786824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8786924Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8787216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8787323Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8787613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8787723Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8788034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:55.8788130Z value_states = self.v(current_states) 2025-08-14T22:02:55.8788143Z 2025-08-14T22:02:55.8788278Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8788547Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8788627Z return mod(**inputs) 2025-08-14T22:02:55.8788933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8789023Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8789330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8789421Z layer_outputs = layer_module( 2025-08-14T22:02:55.8789701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8789810Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8790108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8790208Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8790585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8790697Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8791047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:55.8791178Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:55.8791191Z 2025-08-14T22:02:55.8791319Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8791575Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8791659Z return mod(**inputs) 2025-08-14T22:02:55.8791969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8792081Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8792384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8792481Z layer_outputs = layer_module( 2025-08-14T22:02:55.8792761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8792858Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8793163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8793265Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8793592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8793696Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8793991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:55.8794130Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:55.8794142Z 2025-08-14T22:02:55.8794270Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8794524Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8794603Z return mod(**inputs) 2025-08-14T22:02:55.8794901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8795001Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8795302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8795411Z layer_outputs = layer_module( 2025-08-14T22:02:55.8795694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8795790Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8796109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8796206Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8796502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8796608Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8796898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:55.8796991Z attn_output = self.o(attn_output) 2025-08-14T22:02:55.8797015Z 2025-08-14T22:02:55.8797152Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8797402Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8797483Z return mod(**inputs) 2025-08-14T22:02:55.8797787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8797879Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8798178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8798272Z layer_outputs = layer_module( 2025-08-14T22:02:55.8798548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8798650Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8798948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8799063Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8799361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8799523Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8799826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T22:02:55.8799925Z hidden_states = self.wi(hidden_states) 2025-08-14T22:02:55.8799938Z 2025-08-14T22:02:55.8800064Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8800318Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8800397Z return mod(**inputs) 2025-08-14T22:02:55.8800692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8800810Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8801109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8801202Z layer_outputs = layer_module( 2025-08-14T22:02:55.8801573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8801672Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8801971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8802081Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8802372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8802521Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8802816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T22:02:55.8802950Z hidden_states = self.act(hidden_states) 2025-08-14T22:02:55.8802963Z 2025-08-14T22:02:55.8803087Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8803337Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8803449Z return mod(**inputs) 2025-08-14T22:02:55.8803747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8810899Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8811219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8811308Z layer_outputs = layer_module( 2025-08-14T22:02:55.8811587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8811699Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8811998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8812118Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8812411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8812551Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8812851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T22:02:55.8812981Z hidden_states = self.wo(hidden_states) 2025-08-14T22:02:55.8812995Z 2025-08-14T22:02:55.8813122Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8813379Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8813462Z return mod(**inputs) 2025-08-14T22:02:55.8813769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8813858Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8814203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8814300Z layer_outputs = layer_module( 2025-08-14T22:02:55.8814582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8814682Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8814985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8815097Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8815422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 343, in forward 2025-08-14T22:02:55.8815581Z hidden_states = hidden_states + self.dropout(forwarded_states) 2025-08-14T22:02:55.8815594Z 2025-08-14T22:02:55.8815723Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8815979Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8816061Z return mod(**inputs) 2025-08-14T22:02:55.8816368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8816458Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8816756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8816850Z layer_outputs = layer_module( 2025-08-14T22:02:55.8817131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8817228Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8817553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8817654Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8817955Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8818058Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8818354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:55.8818524Z query_states = self.q(hidden_states) 2025-08-14T22:02:55.8818537Z 2025-08-14T22:02:55.8818663Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8818920Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8819001Z return mod(**inputs) 2025-08-14T22:02:55.8819420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8819530Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8819891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8819979Z layer_outputs = layer_module( 2025-08-14T22:02:55.8820269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8820368Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8820668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8820770Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8821059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8821170Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8821464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:55.8821559Z key_states = self.k(current_states) 2025-08-14T22:02:55.8821571Z 2025-08-14T22:02:55.8821739Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8821986Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8822075Z return mod(**inputs) 2025-08-14T22:02:55.8822373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8822475Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8822784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8822871Z layer_outputs = layer_module( 2025-08-14T22:02:55.8823173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8823279Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8823575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8823680Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8823971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8824070Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8824371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:55.8824531Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:55.8824544Z 2025-08-14T22:02:55.8824678Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8824924Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8825025Z return mod(**inputs) 2025-08-14T22:02:55.8825325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8825414Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8825708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8825806Z layer_outputs = layer_module( 2025-08-14T22:02:55.8826080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8826217Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8826507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8826607Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8826910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8827012Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8827308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:55.8827404Z value_states = self.v(current_states) 2025-08-14T22:02:55.8827417Z 2025-08-14T22:02:55.8827541Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8827791Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8827871Z return mod(**inputs) 2025-08-14T22:02:55.8828167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8828260Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8828557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8828654Z layer_outputs = layer_module( 2025-08-14T22:02:55.8828935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8829053Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8829353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8829453Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8829743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8829849Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8830139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:55.8830278Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:55.8830290Z 2025-08-14T22:02:55.8830440Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8830691Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8830780Z return mod(**inputs) 2025-08-14T22:02:55.8831076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8831173Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8831467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8831556Z layer_outputs = layer_module( 2025-08-14T22:02:55.8831842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8831938Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8832237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8832365Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8832661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8832767Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8833058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:55.8833188Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:55.8833201Z 2025-08-14T22:02:55.8833336Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8833627Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8833708Z return mod(**inputs) 2025-08-14T22:02:55.8842475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8842576Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8842990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8843086Z layer_outputs = layer_module( 2025-08-14T22:02:55.8843464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8843581Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8843979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T22:02:55.8844100Z self_attention_outputs = self.layer[0]( 2025-08-14T22:02:55.8844427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T22:02:55.8844527Z attention_output = self.SelfAttention( 2025-08-14T22:02:55.8844826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:55.8844923Z attn_output = self.o(attn_output) 2025-08-14T22:02:55.8844936Z 2025-08-14T22:02:55.8845066Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8845352Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8845435Z return mod(**inputs) 2025-08-14T22:02:55.8845743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8845833Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8846131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8846225Z layer_outputs = layer_module( 2025-08-14T22:02:55.8846504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8846622Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8846924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8847023Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8847322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8847431Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8847721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T22:02:55.8847825Z query_states = self.q(hidden_states) 2025-08-14T22:02:55.8847838Z 2025-08-14T22:02:55.8847964Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8848218Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8848363Z return mod(**inputs) 2025-08-14T22:02:55.8848663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8849171Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8849473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8849563Z layer_outputs = layer_module( 2025-08-14T22:02:55.8849857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8849954Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8850303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8850405Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8850698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8850820Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8851115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T22:02:55.8851210Z key_states = self.k(current_states) 2025-08-14T22:02:55.8851237Z 2025-08-14T22:02:55.8851368Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8851619Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8851717Z return mod(**inputs) 2025-08-14T22:02:55.8852013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8852104Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8852403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8852491Z layer_outputs = layer_module( 2025-08-14T22:02:55.8852777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8852875Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8853198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8853307Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8853598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8853699Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8854004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T22:02:55.8854163Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T22:02:55.8854176Z 2025-08-14T22:02:55.8854312Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8854596Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8854680Z return mod(**inputs) 2025-08-14T22:02:55.8854982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8855073Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8855375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8855464Z layer_outputs = layer_module( 2025-08-14T22:02:55.8855739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8855844Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8856135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8856232Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8856529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8856654Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8856952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T22:02:55.8857048Z value_states = self.v(current_states) 2025-08-14T22:02:55.8857061Z 2025-08-14T22:02:55.8857188Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8857442Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8857547Z return mod(**inputs) 2025-08-14T22:02:55.8857843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8857936Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8858235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8858333Z layer_outputs = layer_module( 2025-08-14T22:02:55.8858609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8858708Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8859010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8859109Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8859404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8859505Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8859795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T22:02:55.8859934Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T22:02:55.8859948Z 2025-08-14T22:02:55.8860076Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8860324Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8860415Z return mod(**inputs) 2025-08-14T22:02:55.8860733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8860831Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8861129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8861220Z layer_outputs = layer_module( 2025-08-14T22:02:55.8861506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8861600Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8861913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8862019Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8862310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8862418Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8862714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T22:02:55.8862939Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T22:02:55.8862952Z 2025-08-14T22:02:55.8863087Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8863397Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8863492Z return mod(**inputs) 2025-08-14T22:02:55.8863791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8863885Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8864217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8864309Z layer_outputs = layer_module( 2025-08-14T22:02:55.8864590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8864699Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8864994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T22:02:55.8865129Z cross_attention_outputs = self.layer[1]( 2025-08-14T22:02:55.8865421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T22:02:55.8865522Z attention_output = self.EncDecAttention( 2025-08-14T22:02:55.8865823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T22:02:55.8865920Z attn_output = self.o(attn_output) 2025-08-14T22:02:55.8865933Z 2025-08-14T22:02:55.8866067Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8866319Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8866401Z return mod(**inputs) 2025-08-14T22:02:55.8866709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8866801Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8867102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8867194Z layer_outputs = layer_module( 2025-08-14T22:02:55.8867535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8867641Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8867938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8868050Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8868376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8868526Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8868819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T22:02:55.8868925Z hidden_states = self.wi(hidden_states) 2025-08-14T22:02:55.8868938Z 2025-08-14T22:02:55.8869065Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:55.8869319Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8869399Z return mod(**inputs) 2025-08-14T22:02:55.8869720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8869819Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8870116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8870215Z layer_outputs = layer_module( 2025-08-14T22:02:55.8870492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8870588Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8870890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8871002Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8871292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8871449Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8871759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T22:02:55.8871873Z hidden_states = self.act(hidden_states) 2025-08-14T22:02:55.8871886Z 2025-08-14T22:02:55.8872012Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8872261Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8872347Z return mod(**inputs) 2025-08-14T22:02:55.8872670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T22:02:55.8872762Z decoder_outputs = self.decoder( 2025-08-14T22:02:55.8873063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T22:02:55.8873150Z layer_outputs = layer_module( 2025-08-14T22:02:55.8873438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:55.8873536Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:55.8873830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T22:02:55.8873948Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T22:02:55.8874238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T22:02:55.8874392Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T22:02:55.8874685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T22:02:55.8874784Z hidden_states = self.wo(hidden_states) 2025-08-14T22:02:55.8874797Z 2025-08-14T22:02:55.8874927Z cudagraph partition due to non gpu ops. 
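The "cudagraph partition due to non gpu ops" diagnostics above are emitted while Inductor decides which regions it can capture into CUDA graphs; every frame reported here ends at an ordinary CPU op (matmul, linear, activation), so on this CPU-only job each region is partitioned out of capture. A minimal sketch of a setup that produces this kind of logging, assuming a recent PyTorch with torch.compile and torch._logging; the TinyDecoder module and shapes are made up for illustration, not the benchmark's actual code:

    import logging
    import torch

    # Hypothetical stand-in for the HuggingFace decoder blocks traced above.
    class TinyDecoder(torch.nn.Module):
        def __init__(self, dim=64):
            super().__init__()
            self.fc1 = torch.nn.Linear(dim, 4 * dim)
            self.fc2 = torch.nn.Linear(4 * dim, dim)

        def forward(self, x):
            # fc1 -> activation -> fc2 mirrors the frames reported in the stacks
            # (self.fc1(hidden_states), self.act(...), self.wo(...)).
            return self.fc2(torch.nn.functional.gelu(self.fc1(x)))

    # Assumption: verbose Inductor logging is what surfaces the partition messages.
    torch._logging.set_logs(inductor=logging.DEBUG)

    # "reduce-overhead" asks Inductor to use CUDA graphs where it can; with CPU
    # tensors there is nothing to capture, so regions fall back as "non gpu ops".
    model = torch.compile(TinyDecoder(), mode="reduce-overhead")
    out = model(torch.randn(8, 64))  # CPU input -> cudagraph partition fallback

Whether the benchmark harness enables cudagraphs in exactly this way is an assumption; the point is only that CPU-resident ops are what the partitioner keeps reporting.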
Found from : 2025-08-14T22:02:55.8875173Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8875256Z return mod(**inputs) 2025-08-14T22:02:55.8875580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1791, in forward 2025-08-14T22:02:55.8875685Z lm_logits = self.lm_head(sequence_output) 2025-08-14T22:02:55.8875697Z 2025-08-14T22:02:55.8875827Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:02:55.8876073Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:55.8876156Z return mod(**inputs) 2025-08-14T22:02:55.8876461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1798, in forward 2025-08-14T22:02:55.8876633Z loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1)) 2025-08-14T22:02:55.8876647Z 2025-08-14T22:03:02.5040507Z Compilation time (from dynamo_timed): 22.356164099 2025-08-14T22:03:02.5237928Z pass 2025-08-14T22:03:02.5239401Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:03:02.5240546Z TIMING: _recursive_pre_grad_passes:0.08336 _recursive_joint_graph_passes:0.8269 _recursive_post_grad_passes:0.25896 async_compile.wait:0.00753 code_gen:5.50616 inductor_compile:10.11452 backend_compile:18.45402 gc:0.00095 entire_frame_compile:22.35616 total_wall_time:22.35616 2025-08-14T22:03:02.5241823Z STATS: call_* op count: 810 | FakeTensorMode.__torch_dispatch__:34635 | FakeTensor.__torch_dispatch__:5221 | ProxyTorchDispatchMode.__torch_dispatch__:8556 2025-08-14T22:03:02.5242459Z Dynamo produced 1 graphs covering 810 ops with 0 graph breaks (0 unique) 2025-08-14T22:03:09.1481820Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-14T22:03:09.1483145Z from pkg_resources import resource_filename 2025-08-14T22:03:09.9581156Z 2025-08-14T22:03:14.0078916Z loading model: 0it [00:00, ?it/s] 2025-08-14T22:03:14.0079263Z loading model: 0it [00:04, ?it/s] 2025-08-14T22:03:14.0105333Z cpu eval TrOCRForCausalLM 2025-08-14T22:03:14.2693143Z WARNING:common:fp64 golden ref were not generated for TrOCRForCausalLM. 
Setting accuracy check to cosine 2025-08-14T22:03:14.3192918Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:03:14.8171222Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:03:15.3261656Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:03:29.9313389Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9313733Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9314009Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9314254Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9314501Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9314760Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9315087Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9315382Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9315629Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9315865Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9318327Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9318721Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9319040Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9319382Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9319690Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9320019Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9320340Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9320660Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9320956Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9321356Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9321931Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9322260Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9322651Z cudagraph partition due to non gpu ops. 
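When the harness cannot produce an fp64 golden reference (as warned above for TrOCRForCausalLM), it switches the accuracy check to cosine similarity instead of an elementwise tolerance. A rough sketch of such a check on flattened float tensors; the 0.99 threshold and the variable names are illustrative assumptions, not the benchmark's actual settings:

    import torch

    def cosine_accuracy(ref: torch.Tensor, res: torch.Tensor, threshold: float = 0.99) -> bool:
        # Compare the two outputs as flat vectors; cosine similarity ignores
        # uniform scaling, so it is a weaker check than an rtol/atol match.
        ref = ref.detach().float().flatten()
        res = res.detach().float().flatten()
        cos = torch.nn.functional.cosine_similarity(ref, res, dim=0)
        return bool(cos > threshold)

    # Example usage (hypothetical tensors):
    # passed = cosine_accuracy(eager_out.logits, compiled_out.logits)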
Found from : 2025-08-14T22:03:29.9327494Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:03:29.9327935Z return mod(**inputs) 2025-08-14T22:03:29.9328424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:03:29.9328918Z outputs = self.model.decoder( 2025-08-14T22:03:29.9329403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:03:29.9329997Z layer_outputs = decoder_layer( 2025-08-14T22:03:29.9330442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:03:29.9330892Z return super().__call__(*args, **kwargs) 2025-08-14T22:03:29.9331587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:03:29.9332439Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:03:29.9333063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:03:29.9333590Z return self.act(input) 2025-08-14T22:03:29.9333781Z 2025-08-14T22:03:29.9333899Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9334261Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9334578Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9334883Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9335309Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9335600Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9335904Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9336277Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9336659Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9337005Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9337274Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9337616Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:03:29.9338169Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:03:29.9338650Z return mod(**inputs) 2025-08-14T22:03:29.9339119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:03:29.9339614Z outputs = self.model.decoder( 2025-08-14T22:03:29.9340090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:03:29.9340581Z layer_outputs = decoder_layer( 2025-08-14T22:03:29.9341021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:03:29.9341485Z return super().__call__(*args, **kwargs) 2025-08-14T22:03:29.9342210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:03:29.9342818Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:03:29.9343421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:03:29.9343964Z return self.act(input) 2025-08-14T22:03:29.9344129Z 2025-08-14T22:03:29.9344239Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9344522Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9344823Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9345108Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9345393Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9345716Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9346054Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9346363Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9346653Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9346938Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9347250Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9347654Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:03:29.9348249Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:03:29.9349236Z return mod(**inputs) 2025-08-14T22:03:29.9349850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:03:29.9350526Z outputs = self.model.decoder( 2025-08-14T22:03:29.9351139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:03:29.9351724Z layer_outputs = decoder_layer( 2025-08-14T22:03:29.9356408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:03:29.9356996Z return super().__call__(*args, **kwargs) 2025-08-14T22:03:29.9357581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:03:29.9358122Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:03:29.9358607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:03:29.9359033Z return self.act(input) 2025-08-14T22:03:29.9359171Z 2025-08-14T22:03:29.9359268Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9359527Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9359832Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9360081Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9360316Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9360559Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9360800Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9361036Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9361370Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9361620Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9361917Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9362198Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:03:29.9362647Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:03:29.9363049Z return mod(**inputs) 2025-08-14T22:03:29.9363506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:03:29.9363996Z outputs = self.model.decoder( 2025-08-14T22:03:29.9364469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:03:29.9364947Z layer_outputs = decoder_layer( 2025-08-14T22:03:29.9365381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:03:29.9365838Z return super().__call__(*args, **kwargs) 2025-08-14T22:03:29.9366373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:03:29.9366999Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:03:29.9367490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:03:29.9367916Z return self.act(input) 2025-08-14T22:03:29.9368053Z 2025-08-14T22:03:29.9368150Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9368410Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9368658Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9368900Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9369200Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9369443Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9369695Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9369931Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9370174Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9370479Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9370728Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9371010Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:03:29.9371465Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:03:29.9371869Z return mod(**inputs) 2025-08-14T22:03:29.9372353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:03:29.9372842Z outputs = self.model.decoder( 2025-08-14T22:03:29.9373321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:03:29.9373795Z layer_outputs = decoder_layer( 2025-08-14T22:03:29.9374225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:03:29.9374680Z return super().__call__(*args, **kwargs) 2025-08-14T22:03:29.9375159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:03:29.9375697Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:03:29.9376176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:03:29.9376666Z return self.act(input) 2025-08-14T22:03:29.9376809Z 2025-08-14T22:03:29.9376905Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9377157Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9377405Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9377644Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9377888Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9378137Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9378388Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9378623Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9378905Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9379146Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9379387Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9379660Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:03:29.9380110Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:03:29.9380509Z return mod(**inputs) 2025-08-14T22:03:29.9381018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:03:29.9385764Z outputs = self.model.decoder( 2025-08-14T22:03:29.9386242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:03:29.9386721Z layer_outputs = decoder_layer( 2025-08-14T22:03:29.9387160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:03:29.9387615Z return super().__call__(*args, **kwargs) 2025-08-14T22:03:29.9388093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:03:29.9388633Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:03:29.9389124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:03:29.9389558Z return self.act(input) 2025-08-14T22:03:29.9389693Z 2025-08-14T22:03:29.9389789Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9390071Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9390331Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9390575Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9390823Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9391108Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9391361Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9391612Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9391848Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9392101Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9392347Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9392632Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:03:29.9393115Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:03:29.9393529Z return mod(**inputs) 2025-08-14T22:03:29.9393987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:03:29.9394467Z outputs = self.model.decoder( 2025-08-14T22:03:29.9394946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:03:29.9395483Z layer_outputs = decoder_layer( 2025-08-14T22:03:29.9395990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:03:29.9396437Z return super().__call__(*args, **kwargs) 2025-08-14T22:03:29.9396924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:03:29.9397461Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:03:29.9397961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:03:29.9398384Z return self.act(input) 2025-08-14T22:03:29.9398526Z 2025-08-14T22:03:29.9398623Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9398878Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9399122Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9399367Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9399662Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9399944Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9400192Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9400437Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9400675Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9400920Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9401160Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9401527Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:03:29.9401975Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:03:29.9402377Z return mod(**inputs) 2025-08-14T22:03:29.9402828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:03:29.9403311Z outputs = self.model.decoder( 2025-08-14T22:03:29.9403854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:03:29.9404340Z layer_outputs = decoder_layer( 2025-08-14T22:03:29.9404776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:03:29.9405221Z return super().__call__(*args, **kwargs) 2025-08-14T22:03:29.9405711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:03:29.9406249Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:03:29.9406728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:03:29.9407186Z return self.act(input) 2025-08-14T22:03:29.9407333Z 2025-08-14T22:03:29.9407430Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9407683Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9407937Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9408184Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9408438Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9408677Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9408924Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9409170Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9409407Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9409683Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9409989Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9418664Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:03:29.9419253Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:03:29.9419795Z return mod(**inputs) 2025-08-14T22:03:29.9420386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:03:29.9421025Z outputs = self.model.decoder( 2025-08-14T22:03:29.9421660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:03:29.9422223Z layer_outputs = decoder_layer( 2025-08-14T22:03:29.9422660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:03:29.9423106Z return super().__call__(*args, **kwargs) 2025-08-14T22:03:29.9423594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:03:29.9424154Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:03:29.9426828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:03:29.9427256Z return self.act(input) 2025-08-14T22:03:29.9427399Z 2025-08-14T22:03:29.9427497Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9427749Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9427992Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9428276Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9428523Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9428757Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9429004Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9429246Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9429487Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9429734Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9429974Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9439367Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:03:29.9439973Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:03:29.9440388Z return mod(**inputs) 2025-08-14T22:03:29.9440879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:03:29.9441464Z outputs = self.model.decoder( 2025-08-14T22:03:29.9441954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:03:29.9442437Z layer_outputs = decoder_layer( 2025-08-14T22:03:29.9442885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:03:29.9443349Z return super().__call__(*args, **kwargs) 2025-08-14T22:03:29.9443854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:03:29.9444476Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:03:29.9444973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:03:29.9445471Z return self.act(input) 2025-08-14T22:03:29.9445616Z 2025-08-14T22:03:29.9445719Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9445983Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9446238Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9446489Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9446727Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9446978Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9447226Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9447497Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9447755Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9448005Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9448243Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9448541Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:03:29.9449390Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:03:29.9449799Z return mod(**inputs) 2025-08-14T22:03:29.9450260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:03:29.9450758Z outputs = self.model.decoder( 2025-08-14T22:03:29.9451242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:03:29.9451720Z layer_outputs = decoder_layer( 2025-08-14T22:03:29.9452171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:03:29.9452704Z return super().__call__(*args, **kwargs) 2025-08-14T22:03:29.9453196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:03:29.9457966Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:03:29.9458461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:03:29.9458888Z return self.act(input) 2025-08-14T22:03:29.9459081Z 2025-08-14T22:03:29.9459185Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9459448Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9459702Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9459948Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9460199Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9460445Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9460699Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9460939Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9461184Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9461438Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9461675Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9461959Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:03:29.9462414Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:03:29.9462815Z return mod(**inputs) 2025-08-14T22:03:29.9463270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:03:29.9463758Z outputs = self.model.decoder( 2025-08-14T22:03:29.9464234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:03:29.9464706Z layer_outputs = decoder_layer( 2025-08-14T22:03:29.9465144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:03:29.9465602Z return super().__call__(*args, **kwargs) 2025-08-14T22:03:29.9466119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:03:29.9466663Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:03:29.9467153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:03:29.9467581Z return self.act(input) 2025-08-14T22:03:29.9467722Z 2025-08-14T22:03:29.9467858Z cudagraph partition due to non gpu ops 2025-08-14T22:03:29.9468242Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:03:29.9468699Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:03:29.9469149Z return mod(**inputs) 2025-08-14T22:03:29.9469598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 839, in forward 2025-08-14T22:03:29.9470113Z logits = self.output_projection(outputs[0]) 2025-08-14T22:03:29.9470301Z 2025-08-14T22:03:29.9470444Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:03:29.9470882Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:03:29.9471289Z return mod(**inputs) 2025-08-14T22:03:29.9471754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 844, in forward 2025-08-14T22:03:29.9472386Z loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) 2025-08-14T22:03:29.9472636Z 2025-08-14T22:03:36.7247714Z Compilation time (from dynamo_timed): 19.590803654 2025-08-14T22:03:36.7292781Z pass 2025-08-14T22:03:36.7293871Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:03:36.7295030Z TIMING: _recursive_pre_grad_passes:0.0578 _recursive_joint_graph_passes:0.89463 _recursive_post_grad_passes:0.10674 async_compile.wait:0.94996 code_gen:6.25008 inductor_compile:9.91879 backend_compile:16.42353 gc:0.00225 entire_frame_compile:19.5908 total_wall_time:19.5908 2025-08-14T22:03:36.7296259Z STATS: call_* op count: 443 | FakeTensorMode.__torch_dispatch__:26118 | FakeTensor.__torch_dispatch__:3895 | ProxyTorchDispatchMode.__torch_dispatch__:6287 2025-08-14T22:03:36.7296900Z Dynamo produced 1 graphs covering 443 ops with 0 graph breaks (0 unique) 2025-08-14T22:03:43.2592001Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-14T22:03:43.2599519Z from pkg_resources import resource_filename 2025-08-14T22:03:43.9985866Z 2025-08-14T22:03:54.2014255Z loading model: 0it [00:00, ?it/s] 2025-08-14T22:03:54.2014766Z loading model: 0it [00:10, ?it/s] 2025-08-14T22:03:54.2045675Z cpu eval XGLMForCausalLM 2025-08-14T22:03:54.7709310Z WARNING:common:fp64 golden ref were not generated for XGLMForCausalLM. Setting accuracy check to cosine 2025-08-14T22:03:54.9066185Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:03:55.8432273Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:03:56.8683164Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:04:24.0934206Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.0934556Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.0934862Z cudagraph partition due to non gpu ops. 
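The "Compilation time (from dynamo_timed)" and TIMING lines above break the roughly 19.6 s wall time for TrOCRForCausalLM into Dynamo, backend_compile, inductor_compile and code_gen phases. A similar breakdown can be pulled outside the harness with torch._dynamo.utils.compile_times(); the snippet below is a sketch assuming a current PyTorch where that helper is available, with a placeholder model and input:

    import torch
    import torch._dynamo.utils as dynamo_utils

    model = torch.compile(torch.nn.Linear(32, 32))   # placeholder model
    _ = model(torch.randn(4, 32))                    # first call triggers compilation

    # Aggregated per-phase compile times collected via dynamo_timed, comparable to
    # the TIMING line (entire_frame_compile, backend_compile, inductor_compile, ...).
    print(dynamo_utils.compile_times())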
Found from : 2025-08-14T22:04:24.0935356Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.0935839Z return mod(**inputs) 2025-08-14T22:04:24.0937048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.0937925Z outputs = self.model( 2025-08-14T22:04:24.0938769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.0939676Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.0940507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.0950266Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.0951242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.0952359Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.0953321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:04:24.0954307Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:04:24.0954715Z 2025-08-14T22:04:24.0954961Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.0957841Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.0958642Z return mod(**inputs) 2025-08-14T22:04:24.0959468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.0960381Z outputs = self.model( 2025-08-14T22:04:24.0961180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.0962135Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.0962943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.0963927Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.0964851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.0965789Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.0966736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:04:24.0967623Z key_states = self.k_proj(current_states) 2025-08-14T22:04:24.0968076Z 2025-08-14T22:04:24.0968322Z cudagraph partition due to non gpu ops. 
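The recurring "Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]" warnings earlier in this job come from the harness clearing the accelerator cache unconditionally between runs; on a CPU shard there is no cache to clear. A device-guarded version, as a sketch only (the helper name is hypothetical, not the harness's actual function):

    import torch

    def empty_accelerator_cache(device: str) -> None:
        # Only CUDA and XPU expose a cached-allocator release here; anything
        # else (e.g. cpu) is treated as a no-op instead of emitting a warning.
        if device.startswith("cuda") and torch.cuda.is_available():
            torch.cuda.empty_cache()
        elif device.startswith("xpu") and hasattr(torch, "xpu") and torch.xpu.is_available():
            torch.xpu.empty_cache()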
Found from : 2025-08-14T22:04:24.0969131Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.0969851Z return mod(**inputs) 2025-08-14T22:04:24.0970817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.0971316Z outputs = self.model( 2025-08-14T22:04:24.0971753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.0972237Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.0972676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.0973120Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.0973602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.0974113Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.0974615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:04:24.0975238Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:04:24.0975484Z 2025-08-14T22:04:24.0975592Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.0976038Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.0976568Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.0976981Z return mod(**inputs) 2025-08-14T22:04:24.0977435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.0977896Z outputs = self.model( 2025-08-14T22:04:24.0978345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.0978823Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.0979487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.0980289Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.0981256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.0982173Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.0983110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:04:24.0984013Z value_states = self.v_proj(current_states) 2025-08-14T22:04:24.0984351Z 2025-08-14T22:04:24.0984564Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.0989768Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.0990534Z return mod(**inputs) 2025-08-14T22:04:24.0991371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.0992261Z outputs = self.model( 2025-08-14T22:04:24.0993105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.0994060Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.0994852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.0995697Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.0996614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.0997549Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.0998550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:04:24.0999626Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:04:24.0999965Z 2025-08-14T22:04:24.1000208Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.1001064Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1001921Z return mod(**inputs) 2025-08-14T22:04:24.1002756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1003655Z outputs = self.model( 2025-08-14T22:04:24.1004495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1005364Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1006140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1007011Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1007911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1008844Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1009755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:04:24.1010760Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:04:24.1011165Z 2025-08-14T22:04:24.1011462Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1011909Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1012434Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.1013264Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1018177Z return mod(**inputs) 2025-08-14T22:04:24.1018629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1019106Z outputs = self.model( 2025-08-14T22:04:24.1019553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1020072Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1020576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1021417Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1022314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:04:24.1023305Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:04:24.1024178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:04:24.1024974Z return self.act(input) 2025-08-14T22:04:24.1025225Z 2025-08-14T22:04:24.1025406Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1025858Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1026319Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1026842Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.1027709Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1028588Z return mod(**inputs) 2025-08-14T22:04:24.1029417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1030314Z outputs = self.model( 2025-08-14T22:04:24.1031160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1032083Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1032967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1033774Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1034688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1035633Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1036576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:04:24.1037553Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:04:24.1037956Z 2025-08-14T22:04:24.1038196Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.1039007Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1039723Z return mod(**inputs) 2025-08-14T22:04:24.1040559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1041492Z outputs = self.model( 2025-08-14T22:04:24.1042334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1047300Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1048126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1049310Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1050288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1051250Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1052176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:04:24.1053089Z key_states = self.k_proj(current_states) 2025-08-14T22:04:24.1053415Z 2025-08-14T22:04:24.1053627Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.1054470Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1055204Z return mod(**inputs) 2025-08-14T22:04:24.1056096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1056960Z outputs = self.model( 2025-08-14T22:04:24.1057913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1058825Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1059640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1060489Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1061409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1062385Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1063305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:04:24.1064332Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:04:24.1064871Z 2025-08-14T22:04:24.1065044Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1065565Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.1066399Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1067143Z return mod(**inputs) 2025-08-14T22:04:24.1067936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1068801Z outputs = self.model( 2025-08-14T22:04:24.1069666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1070555Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1071354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1076244Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1077144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1078088Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1079032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:04:24.1079910Z value_states = self.v_proj(current_states) 2025-08-14T22:04:24.1080231Z 2025-08-14T22:04:24.1080462Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.1081363Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1082104Z return mod(**inputs) 2025-08-14T22:04:24.1082937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1083807Z outputs = self.model( 2025-08-14T22:04:24.1084610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1085497Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1086488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1087345Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1088244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1089190Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1089975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:04:24.1090749Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:04:24.1091049Z 2025-08-14T22:04:24.1091239Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.1091978Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1092603Z return mod(**inputs) 2025-08-14T22:04:24.1093271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1093977Z outputs = self.model( 2025-08-14T22:04:24.1094801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1095704Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1096514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1097390Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1098306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1099286Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1100266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:04:24.1109972Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:04:24.1110364Z 2025-08-14T22:04:24.1110544Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1110978Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1111470Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.1112231Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1113005Z return mod(**inputs) 2025-08-14T22:04:24.1113786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1114603Z outputs = self.model( 2025-08-14T22:04:24.1117548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1118428Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1119204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1120035Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1120898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:04:24.1121869Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:04:24.1122628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:04:24.1123401Z return self.act(input) 2025-08-14T22:04:24.1123631Z 2025-08-14T22:04:24.1123802Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1124250Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1124695Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1125163Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.1125951Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1126673Z return mod(**inputs) 2025-08-14T22:04:24.1127560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1128438Z outputs = self.model( 2025-08-14T22:04:24.1129253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1130262Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1130718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1131164Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1131640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1132187Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1132687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:04:24.1133222Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:04:24.1133441Z 2025-08-14T22:04:24.1133571Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.1134019Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1134457Z return mod(**inputs) 2025-08-14T22:04:24.1134909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1135376Z outputs = self.model( 2025-08-14T22:04:24.1135808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1136287Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1136721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1137200Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1137673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1138181Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1138788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:04:24.1139733Z key_states = self.k_proj(current_states) 2025-08-14T22:04:24.1140034Z 2025-08-14T22:04:24.1140250Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.1141054Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1141781Z return mod(**inputs) 2025-08-14T22:04:24.1142561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1143408Z outputs = self.model( 2025-08-14T22:04:24.1144225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1149765Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1150509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1151282Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1152104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1153003Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1153905Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:04:24.1154900Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:04:24.1155321Z 2025-08-14T22:04:24.1155490Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1155970Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.1156842Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1157553Z return mod(**inputs) 2025-08-14T22:04:24.1158341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1159316Z outputs = self.model( 2025-08-14T22:04:24.1160089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1160979Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1161933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1162960Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1163943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1164856Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1165759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:04:24.1166655Z value_states = self.v_proj(current_states) 2025-08-14T22:04:24.1166965Z 2025-08-14T22:04:24.1167193Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.1167982Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1168763Z return mod(**inputs) 2025-08-14T22:04:24.1169599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1170577Z outputs = self.model( 2025-08-14T22:04:24.1171486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1172569Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1177564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1178208Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1178964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1179798Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1180763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:04:24.1181568Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:04:24.1181879Z 2025-08-14T22:04:24.1182091Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.1182858Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1183593Z return mod(**inputs) 2025-08-14T22:04:24.1184341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1185172Z outputs = self.model( 2025-08-14T22:04:24.1185977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1186801Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1187582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1188570Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1189452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1190410Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1191363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:04:24.1192369Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:04:24.1192865Z 2025-08-14T22:04:24.1193042Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1193466Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1193989Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.1194836Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1195571Z return mod(**inputs) 2025-08-14T22:04:24.1196400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1197277Z outputs = self.model( 2025-08-14T22:04:24.1198162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1199043Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1199861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1200685Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1201686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:04:24.1206829Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:04:24.1207728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:04:24.1208552Z return self.act(input) 2025-08-14T22:04:24.1208801Z 2025-08-14T22:04:24.1208985Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1209452Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1209892Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1210393Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.1211296Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1212027Z return mod(**inputs) 2025-08-14T22:04:24.1212863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1213756Z outputs = self.model( 2025-08-14T22:04:24.1214611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1215512Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1218559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1219421Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1220331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1221317Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1222290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:04:24.1223275Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:04:24.1223671Z 2025-08-14T22:04:24.1223926Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.1224763Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1225531Z return mod(**inputs) 2025-08-14T22:04:24.1226394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1227270Z outputs = self.model( 2025-08-14T22:04:24.1228129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1229017Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1229901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1230598Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1240203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1241338Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1242284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:04:24.1243185Z key_states = self.k_proj(current_states) 2025-08-14T22:04:24.1243511Z 2025-08-14T22:04:24.1243736Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.1244572Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1245342Z return mod(**inputs) 2025-08-14T22:04:24.1246370Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1247269Z outputs = self.model( 2025-08-14T22:04:24.1248111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1249385Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1250215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1251046Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1251942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1252906Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1253858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:04:24.1254913Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:04:24.1255493Z 2025-08-14T22:04:24.1255680Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1256196Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.1257036Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1257782Z return mod(**inputs) 2025-08-14T22:04:24.1258621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1259509Z outputs = self.model( 2025-08-14T22:04:24.1260611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1261193Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1261625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1262082Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1262565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1263068Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1263577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:04:24.1264065Z value_states = self.v_proj(current_states) 2025-08-14T22:04:24.1264244Z 2025-08-14T22:04:24.1264385Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.1264852Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1265605Z return mod(**inputs) 2025-08-14T22:04:24.1266395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1267270Z outputs = self.model( 2025-08-14T22:04:24.1268125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1269021Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1269965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1270816Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1271706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1272647Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1273597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:04:24.1274564Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:04:24.1281376Z 2025-08-14T22:04:24.1281644Z cudagraph partition due to non gpu ops. 
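The hints traced above come from Inductor's cudagraph handling: when torch.compile runs with CUDA graphs (e.g. mode="reduce-overhead"), an op in the captured graph that does not execute on the GPU forces the graph to be split ("partitioned") around it, and the "Found from :" stack shows which call site introduced the non-GPU work. Below is a minimal, hypothetical sketch of that situation, not a reproduction of this job: the module, shapes, and names are invented, it assumes a CUDA-capable machine with a recent PyTorch, and whether and how the hints are printed depends on the release and on which TORCH_LOGS artifacts (such as perf_hints) are enabled.

import torch
import torch.nn as nn

class TinyAttention(nn.Module):
    """Toy stand-in for the attention block referenced in the traces above."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        q = self.q_proj(hidden_states)        # GPU matmul
        k = self.k_proj(hidden_states)        # GPU matmul
        # Deliberate non-GPU work in the middle of the graph: pull a statistic
        # back to the host and move it to the device again. Partitions like the
        # ones logged above form around ops of this kind.
        scale = q.abs().mean().cpu()
        return torch.bmm(q / (scale.to(q.device) + 1e-6), k.transpose(1, 2))

if torch.cuda.is_available():
    mod = TinyAttention().cuda()
    x = torch.randn(2, 16, 64, device="cuda")
    compiled = torch.compile(mod, mode="reduce-overhead")  # enables cudagraphs
    print(compiled(x).shape)                               # torch.Size([2, 16, 16])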
[... the same seven "cudagraph partition due to non gpu ops" traces (q_proj, k_proj, v_proj, the two torch.bmm calls, the reshape, and the activation) repeat with only their timestamps advancing, through 2025-08-14T22:04:24.1713724Z ...]
Found from : 2025-08-14T22:04:24.1933363Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1933773Z return mod(**inputs) 2025-08-14T22:04:24.1934226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1934734Z outputs = self.model( 2025-08-14T22:04:24.1935173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1935652Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1936090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1936552Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1937026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1937563Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1938068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:04:24.1938565Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:04:24.1938767Z 2025-08-14T22:04:24.1938901Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.1939353Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1939759Z return mod(**inputs) 2025-08-14T22:04:24.1940198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1940679Z outputs = self.model( 2025-08-14T22:04:24.1941130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1941600Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1946682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1947144Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1947629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1948136Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1949032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:04:24.1949620Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:04:24.1949854Z 2025-08-14T22:04:24.1949970Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1950223Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1950517Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.1950970Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1951370Z return mod(**inputs) 2025-08-14T22:04:24.1951817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1952346Z outputs = self.model( 2025-08-14T22:04:24.1952795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1953261Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1953702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1954153Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1954626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:04:24.1955165Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:04:24.1955660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:04:24.1956087Z return self.act(input) 2025-08-14T22:04:24.1962438Z 2025-08-14T22:04:24.1962545Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1962810Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1963110Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1963394Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.1963848Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1964258Z return mod(**inputs) 2025-08-14T22:04:24.1964702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1965167Z outputs = self.model( 2025-08-14T22:04:24.1965614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1966124Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1966549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1966998Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1967482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1967996Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1968489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:04:24.1969013Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:04:24.1969225Z 2025-08-14T22:04:24.1969367Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.1969802Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1970212Z return mod(**inputs) 2025-08-14T22:04:24.1970653Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1971260Z outputs = self.model( 2025-08-14T22:04:24.1971699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1972186Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1972651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1973105Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1973576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1974084Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1974594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:04:24.1975072Z key_states = self.k_proj(current_states) 2025-08-14T22:04:24.1975255Z 2025-08-14T22:04:24.1975386Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.1975855Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1976275Z return mod(**inputs) 2025-08-14T22:04:24.1976708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1977182Z outputs = self.model( 2025-08-14T22:04:24.1977625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1978088Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1978522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1978975Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1979443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1979949Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1980451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:04:24.1981015Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:04:24.1981261Z 2025-08-14T22:04:24.1981364Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.1981650Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.1982094Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1982485Z return mod(**inputs) 2025-08-14T22:04:24.1982919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1983411Z outputs = self.model( 2025-08-14T22:04:24.1983841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1984308Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1984735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1985187Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1989975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1990490Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1990987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:04:24.1991474Z value_states = self.v_proj(current_states) 2025-08-14T22:04:24.1991664Z 2025-08-14T22:04:24.1991793Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.1992235Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1992633Z return mod(**inputs) 2025-08-14T22:04:24.1993066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1993535Z outputs = self.model( 2025-08-14T22:04:24.1994000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.1994470Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.1994893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.1995336Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.1995810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.1996311Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.1996810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:04:24.1997337Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:04:24.1997527Z 2025-08-14T22:04:24.1997661Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.1998095Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.1998504Z return mod(**inputs) 2025-08-14T22:04:24.1998947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.1999404Z outputs = self.model( 2025-08-14T22:04:24.1999916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2000451Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2000881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2001387Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2001879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2002415Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2002913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:04:24.2003444Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:04:24.2003680Z 2025-08-14T22:04:24.2003781Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2004039Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2004361Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2004801Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2005202Z return mod(**inputs) 2025-08-14T22:04:24.2005639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2006101Z outputs = self.model( 2025-08-14T22:04:24.2006542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2007017Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2007441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2007895Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2008371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:04:24.2008900Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:04:24.2009381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:04:24.2009803Z return self.act(input) 2025-08-14T22:04:24.2009940Z 2025-08-14T22:04:24.2010045Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2010294Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2010545Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2010829Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2011653Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2012047Z return mod(**inputs) 2025-08-14T22:04:24.2012506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2012972Z outputs = self.model( 2025-08-14T22:04:24.2013423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2013900Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2018602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2019090Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2019603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2020114Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2020609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:04:24.2021119Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:04:24.2021334Z 2025-08-14T22:04:24.2021466Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2021908Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2022300Z return mod(**inputs) 2025-08-14T22:04:24.2022742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2023261Z outputs = self.model( 2025-08-14T22:04:24.2023704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2024201Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2024628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2025078Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2025542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2026045Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2026569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:04:24.2027049Z key_states = self.k_proj(current_states) 2025-08-14T22:04:24.2027218Z 2025-08-14T22:04:24.2027345Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2027793Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2028192Z return mod(**inputs) 2025-08-14T22:04:24.2028634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2029193Z outputs = self.model( 2025-08-14T22:04:24.2029646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2030116Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2030540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2030987Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2031517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2032019Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2032511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:04:24.2033066Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:04:24.2033305Z 2025-08-14T22:04:24.2033444Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2033723Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2034168Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2034570Z return mod(**inputs) 2025-08-14T22:04:24.2035017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2035477Z outputs = self.model( 2025-08-14T22:04:24.2035912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2036392Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2036855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2037311Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2037794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2038303Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2038793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:04:24.2039291Z value_states = self.v_proj(current_states) 2025-08-14T22:04:24.2039468Z 2025-08-14T22:04:24.2039603Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2040047Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2040438Z return mod(**inputs) 2025-08-14T22:04:24.2040881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2041463Z outputs = self.model( 2025-08-14T22:04:24.2041893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2042370Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2042800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2047476Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2048003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2048505Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2049364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:04:24.2049870Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:04:24.2050070Z 2025-08-14T22:04:24.2050197Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2050637Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2051045Z return mod(**inputs) 2025-08-14T22:04:24.2051481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2051952Z outputs = self.model( 2025-08-14T22:04:24.2052393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2052872Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2053290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2053734Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2054210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2054715Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2055283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:04:24.2055827Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:04:24.2056057Z 2025-08-14T22:04:24.2056167Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2056420Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2056708Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2057151Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2057554Z return mod(**inputs) 2025-08-14T22:04:24.2058075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2058654Z outputs = self.model( 2025-08-14T22:04:24.2059104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2059573Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2060014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2060463Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2060934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:04:24.2061469Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:04:24.2061954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:04:24.2062429Z return self.act(input) 2025-08-14T22:04:24.2062567Z 2025-08-14T22:04:24.2062666Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2062929Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2063220Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2063497Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2063949Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2064350Z return mod(**inputs) 2025-08-14T22:04:24.2064789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2065250Z outputs = self.model( 2025-08-14T22:04:24.2065727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2066207Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2066633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2067075Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2067553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2068057Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2068546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:04:24.2069062Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:04:24.2069267Z 2025-08-14T22:04:24.2069400Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2069847Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2070242Z return mod(**inputs) 2025-08-14T22:04:24.2070678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2071144Z outputs = self.model( 2025-08-14T22:04:24.2071577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2072056Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2076738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2077196Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2077663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2078178Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2078680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:04:24.2079155Z key_states = self.k_proj(current_states) 2025-08-14T22:04:24.2079339Z 2025-08-14T22:04:24.2079470Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2079962Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2080366Z return mod(**inputs) 2025-08-14T22:04:24.2080798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2081350Z outputs = self.model( 2025-08-14T22:04:24.2081797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2082266Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2082698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2083151Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2083628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2084133Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2084641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:04:24.2085244Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:04:24.2085484Z 2025-08-14T22:04:24.2085595Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2085882Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2086326Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2086812Z return mod(**inputs) 2025-08-14T22:04:24.2087345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2087815Z outputs = self.model( 2025-08-14T22:04:24.2088261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2088744Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2089181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2089629Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2090106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2090611Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2091110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:04:24.2091663Z value_states = self.v_proj(current_states) 2025-08-14T22:04:24.2091842Z 2025-08-14T22:04:24.2091983Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2092424Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2092825Z return mod(**inputs) 2025-08-14T22:04:24.2093271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2093744Z outputs = self.model( 2025-08-14T22:04:24.2094226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2094700Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2095131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2095572Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2096055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2096565Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2097060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:04:24.2097585Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:04:24.2097784Z 2025-08-14T22:04:24.2097914Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2098361Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2098766Z return mod(**inputs) 2025-08-14T22:04:24.2099192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2099658Z outputs = self.model( 2025-08-14T22:04:24.2100107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2100577Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2101005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2109960Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2110607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2111305Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2111975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:04:24.2112700Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:04:24.2112932Z 2025-08-14T22:04:24.2113035Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2113293Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2113607Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2114051Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2114454Z return mod(**inputs) 2025-08-14T22:04:24.2114907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2115382Z outputs = self.model( 2025-08-14T22:04:24.2117941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2118434Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2118868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2119319Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2119788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:04:24.2120374Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:04:24.2120874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:04:24.2121383Z return self.act(input) 2025-08-14T22:04:24.2121534Z 2025-08-14T22:04:24.2121634Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2121891Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2122141Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2122413Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2122896Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2123301Z return mod(**inputs) 2025-08-14T22:04:24.2123734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2124202Z outputs = self.model( 2025-08-14T22:04:24.2124639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2125114Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2125538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2126009Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2126493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2126991Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2127493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:04:24.2128014Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:04:24.2128223Z 2025-08-14T22:04:24.2128357Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2128795Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2129191Z return mod(**inputs) 2025-08-14T22:04:24.2129624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2130094Z outputs = self.model( 2025-08-14T22:04:24.2130603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2131168Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2131599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2132044Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2132518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2133022Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2133569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:04:24.2134047Z key_states = self.k_proj(current_states) 2025-08-14T22:04:24.2134228Z 2025-08-14T22:04:24.2134355Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2134798Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2135196Z return mod(**inputs) 2025-08-14T22:04:24.2135635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2136103Z outputs = self.model( 2025-08-14T22:04:24.2136543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2137010Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2137440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2137895Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2138371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2138871Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2139371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:04:24.2139920Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:04:24.2140184Z 2025-08-14T22:04:24.2140284Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2140581Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2141030Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2141427Z return mod(**inputs) 2025-08-14T22:04:24.2141866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2142334Z outputs = self.model( 2025-08-14T22:04:24.2142775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2143268Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2143705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2144152Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2144630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2149463Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2149961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:04:24.2150453Z value_states = self.v_proj(current_states) 2025-08-14T22:04:24.2150631Z 2025-08-14T22:04:24.2150771Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2151206Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2151612Z return mod(**inputs) 2025-08-14T22:04:24.2152050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2152574Z outputs = self.model( 2025-08-14T22:04:24.2153009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2153500Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2154067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2154509Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2155027Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2155525Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2156014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:04:24.2156516Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:04:24.2156711Z 2025-08-14T22:04:24.2156839Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2157279Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2157674Z return mod(**inputs) 2025-08-14T22:04:24.2158108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2158579Z outputs = self.model( 2025-08-14T22:04:24.2159010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2159567Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2160048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2160496Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2160965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2161559Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2162093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:04:24.2162641Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:04:24.2162869Z 2025-08-14T22:04:24.2162970Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2163229Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2163517Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2163955Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2164355Z return mod(**inputs) 2025-08-14T22:04:24.2164847Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2165316Z outputs = self.model( 2025-08-14T22:04:24.2165756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2166236Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2166667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2167106Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2167589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:04:24.2168125Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:04:24.2168609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:04:24.2169028Z return self.act(input) 2025-08-14T22:04:24.2169174Z 2025-08-14T22:04:24.2169276Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2169558Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2169799Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2170082Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2170528Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2170933Z return mod(**inputs) 2025-08-14T22:04:24.2171364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2171827Z outputs = self.model( 2025-08-14T22:04:24.2172294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2172384Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2172666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2172777Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2173088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2173218Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2173531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:04:24.2173667Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:04:24.2173681Z 2025-08-14T22:04:24.2178216Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2178472Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2178564Z return mod(**inputs) 2025-08-14T22:04:24.2178874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2178962Z outputs = self.model( 2025-08-14T22:04:24.2179277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2179371Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2179682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2179793Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2180102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2180232Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2180547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:04:24.2180648Z key_states = self.k_proj(current_states) 2025-08-14T22:04:24.2180660Z 2025-08-14T22:04:24.2180796Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2181071Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2181160Z return mod(**inputs) 2025-08-14T22:04:24.2181470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2181553Z outputs = self.model( 2025-08-14T22:04:24.2181865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2181954Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2182234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2182338Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2182647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2182777Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2183110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:04:24.2183275Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:04:24.2183290Z 2025-08-14T22:04:24.2183395Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2183523Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2183770Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2183859Z return mod(**inputs) 2025-08-14T22:04:24.2184192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2184284Z outputs = self.model( 2025-08-14T22:04:24.2184591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2184683Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2184970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2185072Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2185387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2185508Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2185814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:04:24.2185928Z value_states = self.v_proj(current_states) 2025-08-14T22:04:24.2185940Z 2025-08-14T22:04:24.2186065Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2186315Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2186408Z return mod(**inputs) 2025-08-14T22:04:24.2186717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2186810Z outputs = self.model( 2025-08-14T22:04:24.2187141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2187231Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2187515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2187610Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2187918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2188046Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2188422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:04:24.2188569Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:04:24.2188585Z 2025-08-14T22:04:24.2188758Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2189009Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2189096Z return mod(**inputs) 2025-08-14T22:04:24.2189405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2189499Z outputs = self.model( 2025-08-14T22:04:24.2189805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2189897Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2190184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2190280Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2190590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2190738Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2191048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:04:24.2191211Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:04:24.2191224Z 2025-08-14T22:04:24.2191322Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2191418Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2191623Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2191867Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2191958Z return mod(**inputs) 2025-08-14T22:04:24.2192269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2192355Z outputs = self.model( 2025-08-14T22:04:24.2192670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2192762Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2193036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2193142Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2193446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:04:24.2193599Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:04:24.2193868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:04:24.2193952Z return self.act(input) 2025-08-14T22:04:24.2193964Z 2025-08-14T22:04:24.2194071Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2194171Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2194262Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2194398Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2194664Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2194754Z return mod(**inputs) 2025-08-14T22:04:24.2195119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2195204Z outputs = self.model( 2025-08-14T22:04:24.2195518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2195609Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2195885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2196014Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2196328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2196458Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2196772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:04:24.2196906Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:04:24.2196919Z 2025-08-14T22:04:24.2197053Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2197302Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2197392Z return mod(**inputs) 2025-08-14T22:04:24.2197699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2197783Z outputs = self.model( 2025-08-14T22:04:24.2198120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2198209Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2198487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2198592Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2198900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2199060Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2199365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:04:24.2199468Z key_states = self.k_proj(current_states) 2025-08-14T22:04:24.2199482Z 2025-08-14T22:04:24.2199615Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2199862Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2199945Z return mod(**inputs) 2025-08-14T22:04:24.2200261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2200345Z outputs = self.model( 2025-08-14T22:04:24.2200659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2200750Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2201029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2201133Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2201516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2201650Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2201961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:04:24.2202162Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:04:24.2202175Z 2025-08-14T22:04:24.2202281Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2202409Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2202658Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2206981Z return mod(**inputs) 2025-08-14T22:04:24.2207296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2207387Z outputs = self.model( 2025-08-14T22:04:24.2207700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2207819Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2208113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2208214Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2208525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2208652Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2208962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:04:24.2209078Z value_states = self.v_proj(current_states) 2025-08-14T22:04:24.2209091Z 2025-08-14T22:04:24.2209217Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2209468Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2209556Z return mod(**inputs) 2025-08-14T22:04:24.2209866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2209983Z outputs = self.model( 2025-08-14T22:04:24.2210293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2210382Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2210664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2210760Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2211091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2211218Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2211523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:04:24.2211652Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:04:24.2211665Z 2025-08-14T22:04:24.2211789Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2212037Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2212138Z return mod(**inputs) 2025-08-14T22:04:24.2212447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2212537Z outputs = self.model( 2025-08-14T22:04:24.2212846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2212937Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2213216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2213312Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2213618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2213743Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2214078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:04:24.2214241Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:04:24.2214254Z 2025-08-14T22:04:24.2214350Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2214445Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2214579Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2214824Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2214905Z return mod(**inputs) 2025-08-14T22:04:24.2215244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2215329Z outputs = self.model( 2025-08-14T22:04:24.2215643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2215733Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2216008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2216110Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2216415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:04:24.2216573Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:04:24.2216838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:04:24.2216923Z return self.act(input) 2025-08-14T22:04:24.2216936Z 2025-08-14T22:04:24.2217042Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2217162Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2217326Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2217458Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2217750Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2217843Z return mod(**inputs) 2025-08-14T22:04:24.2218152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2218257Z outputs = self.model( 2025-08-14T22:04:24.2218572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2218663Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2218940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2219046Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2219355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2219481Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2219786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:04:24.2219925Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:04:24.2219938Z 2025-08-14T22:04:24.2220068Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2220316Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2220396Z return mod(**inputs) 2025-08-14T22:04:24.2220708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2220791Z outputs = self.model( 2025-08-14T22:04:24.2221107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2221200Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2221496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2221598Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2221958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2222087Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2222392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:04:24.2222491Z key_states = self.k_proj(current_states) 2025-08-14T22:04:24.2222503Z 2025-08-14T22:04:24.2222662Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2222910Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2222990Z return mod(**inputs) 2025-08-14T22:04:24.2223305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2223389Z outputs = self.model( 2025-08-14T22:04:24.2223704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2223795Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2224074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2224181Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2224488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2224610Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2224950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:04:24.2225117Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:04:24.2225130Z 2025-08-14T22:04:24.2225236Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2225363Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2225609Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2225718Z return mod(**inputs) 2025-08-14T22:04:24.2226030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2226128Z outputs = self.model( 2025-08-14T22:04:24.2226434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2226525Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2226813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2226910Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2227223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2227348Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2227655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:04:24.2227771Z value_states = self.v_proj(current_states) 2025-08-14T22:04:24.2227784Z 2025-08-14T22:04:24.2227910Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2228155Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2228247Z return mod(**inputs) 2025-08-14T22:04:24.2228556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2228647Z outputs = self.model( 2025-08-14T22:04:24.2228973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2229064Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2229347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2229442Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2229751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2229876Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2230210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:04:24.2230338Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:04:24.2230351Z 2025-08-14T22:04:24.2230479Z cudagraph partition due to non gpu ops. 
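Editor's note: the XGLM frames in this block keep landing on the same few source lines of the attention module: the scaled q_proj (modeling_xglm.py:156), k_proj (:175), v_proj (:176), the two torch.bmm calls (:197, :243) and the final reshape (:256). For reference, a self-contained sketch of that bmm attention pattern, simplified from what the traces point at (head handling details, masking and dropout are omitted; module layout and shapes here are illustrative assumptions, not the transformers implementation):

# Simplified sketch of the bmm-based attention the stack frames point to.
import torch
import torch.nn as nn

class TinyBmmAttention(nn.Module):
    def __init__(self, embed_dim: int, num_heads: int):
        super().__init__()
        self.embed_dim = embed_dim
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.scaling = self.head_dim ** -0.5
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)

    def _shape(self, x: torch.Tensor, bsz: int, seq: int) -> torch.Tensor:
        # (bsz, seq, embed) -> (bsz * heads, seq, head_dim)
        return (
            x.view(bsz, seq, self.num_heads, self.head_dim)
            .transpose(1, 2)
            .reshape(bsz * self.num_heads, seq, self.head_dim)
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        bsz, tgt_len, _ = hidden_states.shape
        query_states = self.q_proj(hidden_states) * self.scaling              # cf. modeling_xglm.py:156
        key_states = self._shape(self.k_proj(hidden_states), bsz, tgt_len)    # cf. :175
        value_states = self._shape(self.v_proj(hidden_states), bsz, tgt_len)  # cf. :176
        query_states = self._shape(query_states, bsz, tgt_len)
        attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))    # cf. :197
        attn_probs = attn_weights.softmax(dim=-1)
        attn_output = torch.bmm(attn_probs, value_states)                     # cf. :243
        attn_output = (
            attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim)
            .transpose(1, 2)
            .reshape(bsz, tgt_len, self.embed_dim)                            # cf. :256
        )
        return attn_output

print(TinyBmmAttention(64, 4)(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])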
Found from : 2025-08-14T22:04:24.2230730Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2230820Z return mod(**inputs) 2025-08-14T22:04:24.2231127Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2231209Z outputs = self.model( 2025-08-14T22:04:24.2231529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2231622Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2240397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2240510Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2240962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2241108Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2241515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:04:24.2241682Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:04:24.2241695Z 2025-08-14T22:04:24.2241791Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2241919Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2242051Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2242298Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2242378Z return mod(**inputs) 2025-08-14T22:04:24.2242693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2242777Z outputs = self.model( 2025-08-14T22:04:24.2243091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2243183Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2243457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2243560Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2243869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:04:24.2244017Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:04:24.2244292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:04:24.2244377Z return self.act(input) 2025-08-14T22:04:24.2244390Z 2025-08-14T22:04:24.2244496Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2244592Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2244685Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2244842Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2245088Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2245169Z return mod(**inputs) 2025-08-14T22:04:24.2245490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2245576Z outputs = self.model( 2025-08-14T22:04:24.2245889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2245977Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2246320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2246448Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2246808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2246934Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2247242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:04:24.2247375Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:04:24.2247388Z 2025-08-14T22:04:24.2247523Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2247769Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2247851Z return mod(**inputs) 2025-08-14T22:04:24.2248169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2248256Z outputs = self.model( 2025-08-14T22:04:24.2248593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2248932Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2249219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2249324Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2249636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2249781Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2250099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:04:24.2250200Z key_states = self.k_proj(current_states) 2025-08-14T22:04:24.2250213Z 2025-08-14T22:04:24.2250349Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2250602Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2250689Z return mod(**inputs) 2025-08-14T22:04:24.2251160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2251247Z outputs = self.model( 2025-08-14T22:04:24.2251565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2251655Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2251939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2252045Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2252355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2252475Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2252791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:04:24.2252980Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:04:24.2252994Z 2025-08-14T22:04:24.2253099Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2253225Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2253473Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2253564Z return mod(**inputs) 2025-08-14T22:04:24.2253877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2253970Z outputs = self.model( 2025-08-14T22:04:24.2254303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2254398Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2254688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2254787Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2255097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2255221Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2255535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:04:24.2255651Z value_states = self.v_proj(current_states) 2025-08-14T22:04:24.2255664Z 2025-08-14T22:04:24.2255788Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2256033Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2256123Z return mod(**inputs) 2025-08-14T22:04:24.2256506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2256593Z outputs = self.model( 2025-08-14T22:04:24.2256907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2256999Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2257283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2257409Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2257716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2257843Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2258151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:04:24.2258276Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:04:24.2258288Z 2025-08-14T22:04:24.2258411Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2258663Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2258756Z return mod(**inputs) 2025-08-14T22:04:24.2259068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2259158Z outputs = self.model( 2025-08-14T22:04:24.2259481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2259572Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2259856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2259953Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2260262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2260390Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2260807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:04:24.2260979Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:04:24.2260992Z 2025-08-14T22:04:24.2261091Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2261225Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2261370Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2261616Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2261698Z return mod(**inputs) 2025-08-14T22:04:24.2262057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2262144Z outputs = self.model( 2025-08-14T22:04:24.2262466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2262559Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2262836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2262939Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2263250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:04:24.2263397Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:04:24.2263672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:04:24.2263759Z return self.act(input) 2025-08-14T22:04:24.2263771Z 2025-08-14T22:04:24.2263901Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2263995Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2264087Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2264218Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2264463Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2264543Z return mod(**inputs) 2025-08-14T22:04:24.2264859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2264965Z outputs = self.model( 2025-08-14T22:04:24.2265281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2265371Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2265647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2265751Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2266058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2266179Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2266494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:04:24.2266630Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:04:24.2266643Z 2025-08-14T22:04:24.2266779Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2267026Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2267110Z return mod(**inputs) 2025-08-14T22:04:24.2267424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2267509Z outputs = self.model( 2025-08-14T22:04:24.2267829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2267919Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2268224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2268332Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2268639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2268765Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2269081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:04:24.2269181Z key_states = self.k_proj(current_states) 2025-08-14T22:04:24.2269194Z 2025-08-14T22:04:24.2269347Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2269598Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2269684Z return mod(**inputs) 2025-08-14T22:04:24.2270001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2270085Z outputs = self.model( 2025-08-14T22:04:24.2270400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2270488Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2270770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2270877Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2271182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2271303Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2271653Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:04:24.2271820Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:04:24.2271832Z 2025-08-14T22:04:24.2271937Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2272063Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2272310Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2272426Z return mod(**inputs) 2025-08-14T22:04:24.2272732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2272816Z outputs = self.model( 2025-08-14T22:04:24.2273133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2273224Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2273506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2273604Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2273913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2274042Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2274350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:04:24.2274464Z value_states = self.v_proj(current_states) 2025-08-14T22:04:24.2274477Z 2025-08-14T22:04:24.2274602Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2274847Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2274939Z return mod(**inputs) 2025-08-14T22:04:24.2281470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2281588Z outputs = self.model( 2025-08-14T22:04:24.2281967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2282060Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2282345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2282446Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2282756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2282890Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2283221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:04:24.2283349Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:04:24.2283363Z 2025-08-14T22:04:24.2283488Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:04:24.2283737Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2283827Z return mod(**inputs) 2025-08-14T22:04:24.2284134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2284217Z outputs = self.model( 2025-08-14T22:04:24.2284535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2284624Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2284906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2285009Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2285339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:04:24.2285464Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:04:24.2285769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:04:24.2285925Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:04:24.2285943Z 2025-08-14T22:04:24.2286044Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2286161Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2286292Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2286538Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2286621Z return mod(**inputs) 2025-08-14T22:04:24.2287130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:04:24.2287262Z outputs = self.model( 2025-08-14T22:04:24.2287704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:04:24.2287846Z layer_outputs = decoder_layer( 2025-08-14T22:04:24.2288244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:04:24.2288367Z return super().__call__(*args, **kwargs) 2025-08-14T22:04:24.2288674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:04:24.2288821Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:04:24.2289094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:04:24.2289178Z return self.act(input) 2025-08-14T22:04:24.2289192Z 2025-08-14T22:04:24.2289295Z cudagraph partition due to non gpu ops 2025-08-14T22:04:24.2289418Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:04:24.2289689Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:04:24.2289871Z return mod(**inputs) 2025-08-14T22:04:24.2290214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 681, in forward 2025-08-14T22:04:24.2290329Z logits = self.lm_head(outputs[0]) 2025-08-14T22:04:24.2290342Z 2025-08-14T22:04:24.2290475Z cudagraph partition due to non gpu ops. 
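Editor's note: each "cudagraph partition due to non gpu ops" record is Inductor reporting that its cudagraph partitioning had to split the captured graph around ops that do not run on the GPU; on this CPU-only benchmark that applies to essentially the whole model, so the message repeats for every flagged call site. A toy setup that puts torch.compile in the same position (a sketch under assumptions: mode="reduce-overhead" requests cudagraphs, the inputs stay on CPU, and a toy module stands in for the benchmark harness; it is not guaranteed to print these exact diagnostics):

# Minimal sketch: cudagraphs requested via reduce-overhead, but every op runs on CPU,
# so from the cudagraph partitioner's point of view the graph is all "non gpu ops".
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
compiled = torch.compile(model, mode="reduce-overhead")  # reduce-overhead enables cudagraphs

x = torch.randn(8, 64)   # CPU tensor: nothing here can be captured into a CUDA graph
for _ in range(3):       # a few iterations, mirroring how the benchmark re-runs the model
    out = compiled(x)
print(out.shape)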
Found from :
2025-08-14T22:04:24.2290719Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:04:24.2290805Z return mod(**inputs)
2025-08-14T22:04:24.2291116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 685, in forward
2025-08-14T22:04:24.2291226Z loss = self.loss_function(
2025-08-14T22:04:24.2291548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss
2025-08-14T22:04:24.2291768Z loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
2025-08-14T22:04:24.2292093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy
2025-08-14T22:04:24.2292343Z loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction)
2025-08-14T22:04:24.2292358Z
2025-08-14T22:04:36.0895249Z Compilation time (from dynamo_timed): 36.947049838
2025-08-14T22:04:36.1015611Z pass
2025-08-14T22:04:36.1020999Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:04:36.1022021Z TIMING: _recursive_pre_grad_passes:0.36097 _recursive_joint_graph_passes:1.13376 _recursive_post_grad_passes:0.37646 async_compile.wait:1.06144 code_gen:10.21494 inductor_compile:16.51658 backend_compile:30.34478 gc:0.00046 entire_frame_compile:36.94705 total_wall_time:36.94705
2025-08-14T22:04:36.1023359Z STATS: call_* op count: 921 | FakeTensorMode.__torch_dispatch__:56870 | FakeTensor.__torch_dispatch__:9090 | ProxyTorchDispatchMode.__torch_dispatch__:12392
2025-08-14T22:04:36.1023985Z Dynamo produced 1 graphs covering 921 ops with 0 graph breaks (0 unique)
2025-08-14T22:04:42.8436364Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T22:04:42.8437775Z from pkg_resources import resource_filename
2025-08-14T22:04:43.5660521Z
2025-08-14T22:04:49.0307981Z loading model: 0it [00:00, ?it/s]
2025-08-14T22:04:49.0308348Z loading model: 0it [00:05, ?it/s]
2025-08-14T22:04:49.0340110Z cpu eval XLNetLMHeadModel
2025-08-14T22:04:53.3210360Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:04:55.2107404Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:04:57.0897877Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:05:35.5123153Z cudagraph partition due to non gpu ops.
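Editor's note: the repeated "Trying to call the empty_gpu_cache for device: cpu" warnings are the benchmark harness (the "common" logger) declining to clear an accelerator cache because the device under test is cpu. A hypothetical device-aware helper in the same spirit (names and structure are illustrative assumptions, not the harness's actual code):

# Hypothetical device-aware cache clearing, matching the warning seen in the log.
import logging
import torch

log = logging.getLogger("common")

def empty_accelerator_cache(device: str) -> None:
    if device == "cuda" and torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif device == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()
    else:
        # Nothing to clear for cpu: warn and continue, as the harness does.
        log.warning(
            "Trying to call the empty_gpu_cache for device: %s, which is not in list [cuda, xpu]",
            device,
        )

empty_accelerator_cache("cpu")  # emits the warning, clears nothing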
Found from : 2025-08-14T22:05:35.5123797Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.5126639Z return mod(**inputs) 2025-08-14T22:05:35.5127139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.5127665Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.5128186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1307, in forward 2025-08-14T22:05:35.5128700Z word_emb_k = self.word_embedding(input_ids) 2025-08-14T22:05:35.5128982Z 2025-08-14T22:05:35.5129506Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.5130273Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.5130698Z return mod(**inputs) 2025-08-14T22:05:35.5131165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.5131680Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.5132222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1334, in forward 2025-08-14T22:05:35.5132785Z pos_emb = self.relative_positional_encoding(qlen, klen, bsz=bsz) 2025-08-14T22:05:35.5133551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1157, in relative_positional_encoding 2025-08-14T22:05:35.5134545Z pos_emb = self.positional_embedding(fwd_pos_seq, inv_freq, bsz) 2025-08-14T22:05:35.5135480Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1115, in positional_embedding 2025-08-14T22:05:35.5136128Z pos_emb = torch.cat([torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)], dim=-1) 2025-08-14T22:05:35.5136396Z 2025-08-14T22:05:35.5136548Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.5137010Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.5137414Z return mod(**inputs) 2025-08-14T22:05:35.5137875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.5138378Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.5139003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1334, in forward 2025-08-14T22:05:35.5139702Z pos_emb = self.relative_positional_encoding(qlen, klen, bsz=bsz) 2025-08-14T22:05:35.5140319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1157, in relative_positional_encoding 2025-08-14T22:05:35.5140928Z pos_emb = self.positional_embedding(fwd_pos_seq, inv_freq, bsz) 2025-08-14T22:05:35.5141505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1115, in positional_embedding 2025-08-14T22:05:35.5142188Z pos_emb = torch.cat([torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)], dim=-1) 2025-08-14T22:05:35.5142456Z 2025-08-14T22:05:35.5142588Z cudagraph partition due to non gpu ops. 
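Editor's note: the XLNet records above bottom out in the sinusoidal relative positional encoding, i.e. the torch.cat([torch.sin(...), torch.cos(...)], dim=-1) at modeling_xlnet.py line 1115. A stripped-down sketch of that computation (the outer-product construction of sinusoid_inp follows the standard XLNet formulation and is an assumption, not a verbatim copy of the transformers code):

# Sketch of the sinusoidal relative positional encoding the XLNet frames point at.
import torch

def relative_positional_encoding(qlen: int, klen: int, d_model: int) -> torch.Tensor:
    inv_freq = 1.0 / (10000 ** (torch.arange(0.0, d_model, 2.0) / d_model))
    # Relative positions run from klen down to -qlen (exclusive).
    fwd_pos_seq = torch.arange(klen, -qlen, -1.0)
    sinusoid_inp = torch.einsum("i,d->id", fwd_pos_seq, inv_freq)
    # The line the cudagraph partitioner flagged (modeling_xlnet.py:1115):
    pos_emb = torch.cat([torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)], dim=-1)
    return pos_emb[:, None, :]  # add a broadcastable batch axis

print(relative_positional_encoding(qlen=4, klen=4, d_model=8).shape)  # torch.Size([8, 1, 8])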
Found from : 2025-08-14T22:05:35.5143033Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.5143430Z return mod(**inputs) 2025-08-14T22:05:35.5143883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.5144379Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.5144865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.5145345Z outputs = layer_module( 2025-08-14T22:05:35.5145804Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.5146281Z outputs = self.rel_attn( 2025-08-14T22:05:35.5146731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward 2025-08-14T22:05:35.5147245Z q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q) 2025-08-14T22:05:35.5147439Z 2025-08-14T22:05:35.5147583Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.5148025Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.5148425Z return mod(**inputs) 2025-08-14T22:05:35.5149318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.5149830Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.5150570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.5151259Z outputs = layer_module( 2025-08-14T22:05:35.5151785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.5152369Z outputs = self.rel_attn( 2025-08-14T22:05:35.5152923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 417, in forward 2025-08-14T22:05:35.5157624Z k_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.k) 2025-08-14T22:05:35.5157869Z 2025-08-14T22:05:35.5158013Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:05:35.5158465Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.5158860Z return mod(**inputs) 2025-08-14T22:05:35.5159311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.5159810Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.5160298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.5160780Z outputs = layer_module( 2025-08-14T22:05:35.5161314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.5161819Z outputs = self.rel_attn( 2025-08-14T22:05:35.5162618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:05:35.5163350Z attn_vec = self.rel_attn_core( 2025-08-14T22:05:35.5164020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 263, in rel_attn_core 2025-08-14T22:05:35.5164723Z ac = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_w_bias, k_head_h) 2025-08-14T22:05:35.5165025Z 2025-08-14T22:05:35.5165159Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.5165709Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.5166166Z return mod(**inputs) 2025-08-14T22:05:35.5166700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.5167265Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.5167884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1334, in forward 2025-08-14T22:05:35.5168490Z pos_emb = self.relative_positional_encoding(qlen, klen, bsz=bsz) 2025-08-14T22:05:35.5169097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1157, in relative_positional_encoding 2025-08-14T22:05:35.5169713Z pos_emb = self.positional_embedding(fwd_pos_seq, inv_freq, bsz) 2025-08-14T22:05:35.5170350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1115, in positional_embedding 2025-08-14T22:05:35.5170980Z pos_emb = torch.cat([torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)], dim=-1) 2025-08-14T22:05:35.5171242Z 2025-08-14T22:05:35.5171373Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:05:35.5171870Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.5172274Z return mod(**inputs) 2025-08-14T22:05:35.5172722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.5173259Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.5173769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.5174245Z outputs = layer_module( 2025-08-14T22:05:35.5174702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.5175180Z outputs = self.rel_attn( 2025-08-14T22:05:35.5175638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward 2025-08-14T22:05:35.5176185Z k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r) 2025-08-14T22:05:35.5176458Z 2025-08-14T22:05:35.5176594Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.5177040Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.5177493Z return mod(**inputs) 2025-08-14T22:05:35.5178003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.5178583Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.5179155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.5179686Z outputs = layer_module( 2025-08-14T22:05:35.5180268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.5180815Z outputs = self.rel_attn( 2025-08-14T22:05:35.5181335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:05:35.5181969Z attn_vec = self.rel_attn_core( 2025-08-14T22:05:35.5186801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core 2025-08-14T22:05:35.5187445Z bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r) 2025-08-14T22:05:35.5187734Z 2025-08-14T22:05:35.5187894Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T22:05:35.5188420Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:05:35.5188920Z     return mod(**inputs)
2025-08-14T22:05:35.5189423Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-08-14T22:05:35.5190025Z     transformer_outputs = self.transformer(
2025-08-14T22:05:35.5190584Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-08-14T22:05:35.5191198Z     outputs = layer_module(
2025-08-14T22:05:35.5191697Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-08-14T22:05:35.5192279Z     outputs = self.rel_attn(
2025-08-14T22:05:35.5192825Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 418, in forward
2025-08-14T22:05:35.5193394Z     v_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.v)
2025-08-14T22:05:35.5193603Z 
2025-08-14T22:05:35.5193791Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:05:35.5194281Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:05:35.5194750Z     return mod(**inputs)
2025-08-14T22:05:35.5195300Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-08-14T22:05:35.5195902Z     transformer_outputs = self.transformer(
2025-08-14T22:05:35.5196458Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-08-14T22:05:35.5197052Z     outputs = layer_module(
2025-08-14T22:05:35.5197583Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-08-14T22:05:35.5198066Z     outputs = self.rel_attn(
2025-08-14T22:05:35.5198529Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward
2025-08-14T22:05:35.5199019Z     attn_vec = self.rel_attn_core(
2025-08-14T22:05:35.5199513Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core
2025-08-14T22:05:35.5200091Z     attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h)
2025-08-14T22:05:35.5200316Z 
2025-08-14T22:05:35.5200483Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:05:35.5200932Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:05:35.5201513Z     return mod(**inputs)
2025-08-14T22:05:35.5202104Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-08-14T22:05:35.5202661Z     transformer_outputs = self.transformer(
2025-08-14T22:05:35.5203258Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-08-14T22:05:35.5203796Z     outputs = layer_module(
2025-08-14T22:05:35.5204364Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-08-14T22:05:35.5204895Z     outputs = self.rel_attn(
2025-08-14T22:05:35.5205462Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward
2025-08-14T22:05:35.5206076Z     output_h = self.post_attention(h, attn_vec)
2025-08-14T22:05:35.5206691Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention
2025-08-14T22:05:35.5207325Z     attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o)
2025-08-14T22:05:35.5207558Z 
2025-08-14T22:05:35.5207718Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:05:35.5208253Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:05:35.5208721Z     return mod(**inputs)
2025-08-14T22:05:35.5209301Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-08-14T22:05:35.5209869Z     transformer_outputs = self.transformer(
2025-08-14T22:05:35.5210437Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-08-14T22:05:35.5210998Z     outputs = layer_module(
2025-08-14T22:05:35.5215673Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-08-14T22:05:35.5216212Z     outputs = self.rel_attn(
2025-08-14T22:05:35.5217111Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward
2025-08-14T22:05:35.5217709Z     output_h = self.post_attention(h, attn_vec)
2025-08-14T22:05:35.5218330Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention
2025-08-14T22:05:35.5218944Z     attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o)
2025-08-14T22:05:35.5219212Z 
2025-08-14T22:05:35.5219325Z cudagraph partition due to non gpu ops
2025-08-14T22:05:35.5219664Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:05:35.5220162Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:05:35.5220644Z     return mod(**inputs)
2025-08-14T22:05:35.5221159Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-08-14T22:05:35.5221759Z     transformer_outputs = self.transformer(
2025-08-14T22:05:35.5222356Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-08-14T22:05:35.5222902Z     outputs = layer_module(
2025-08-14T22:05:35.5223445Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 512, in forward
2025-08-14T22:05:35.5224227Z     output_h = apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, output_h)
2025-08-14T22:05:35.5225011Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T22:05:35.5226737Z     return forward_fn(*input_tensors)
2025-08-14T22:05:35.5227260Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 518, in ff_chunk
2025-08-14T22:05:35.5227762Z     output_x = self.ff(output_x)
2025-08-14T22:05:35.5228232Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 464, in forward
2025-08-14T22:05:35.5228741Z     output = self.activation_function(output)
2025-08-14T22:05:35.5229184Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T22:05:35.5229616Z     return self.act(input)
2025-08-14T22:05:35.5229756Z 
2025-08-14T22:05:35.5229865Z cudagraph partition due to non gpu ops
2025-08-14T22:05:35.5230145Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:05:35.5230770Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:05:35.5231260Z     return mod(**inputs)
2025-08-14T22:05:35.5231952Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-08-14T22:05:35.5232540Z     transformer_outputs = self.transformer(
2025-08-14T22:05:35.5233209Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-08-14T22:05:35.5233798Z     outputs = layer_module(
2025-08-14T22:05:35.5234457Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-08-14T22:05:35.5235149Z     outputs = self.rel_attn(
2025-08-14T22:05:35.5235756Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward
2025-08-14T22:05:35.5236459Z     q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q)
2025-08-14T22:05:35.5236654Z 
2025-08-14T22:05:35.5236863Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:05:35.5237431Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:05:35.5237931Z     return mod(**inputs)
2025-08-14T22:05:35.5238460Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-08-14T22:05:35.5239061Z     transformer_outputs = self.transformer(
2025-08-14T22:05:35.5239624Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-08-14T22:05:35.5244483Z     outputs = layer_module(
2025-08-14T22:05:35.5244947Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-08-14T22:05:35.5245496Z     outputs = self.rel_attn(
2025-08-14T22:05:35.5246020Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 417, in forward
2025-08-14T22:05:35.5246627Z     k_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.k)
2025-08-14T22:05:35.5246829Z 
2025-08-14T22:05:35.5246967Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:05:35.5247549Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:05:35.5248028Z     return mod(**inputs)
2025-08-14T22:05:35.5248480Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-08-14T22:05:35.5249455Z     transformer_outputs = self.transformer(
2025-08-14T22:05:35.5249997Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-08-14T22:05:35.5250539Z     outputs = layer_module(
2025-08-14T22:05:35.5251128Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-08-14T22:05:35.5251811Z     outputs = self.rel_attn(
2025-08-14T22:05:35.5252585Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward
2025-08-14T22:05:35.5253178Z     attn_vec = self.rel_attn_core(
2025-08-14T22:05:35.5253782Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 263, in rel_attn_core
2025-08-14T22:05:35.5254496Z     ac = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_w_bias, k_head_h)
2025-08-14T22:05:35.5254820Z 
2025-08-14T22:05:35.5255010Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:05:35.5255519Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:05:35.5255913Z     return mod(**inputs)
2025-08-14T22:05:35.5256363Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-08-14T22:05:35.5256855Z     transformer_outputs = self.transformer(
2025-08-14T22:05:35.5257338Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-08-14T22:05:35.5257857Z     outputs = layer_module(
2025-08-14T22:05:35.5258310Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-08-14T22:05:35.5258783Z     outputs = self.rel_attn(
2025-08-14T22:05:35.5259264Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward
2025-08-14T22:05:35.5259900Z     k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r)
2025-08-14T22:05:35.5260179Z 
2025-08-14T22:05:35.5260318Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:05:35.5260751Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:05:35.5261149Z     return mod(**inputs)
2025-08-14T22:05:35.5261599Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-08-14T22:05:35.5262095Z     transformer_outputs = self.transformer(
2025-08-14T22:05:35.5262579Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-08-14T22:05:35.5263053Z     outputs = layer_module(
2025-08-14T22:05:35.5263502Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-08-14T22:05:35.5263974Z     outputs = self.rel_attn(
2025-08-14T22:05:35.5264424Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward
2025-08-14T22:05:35.5264904Z     attn_vec = self.rel_attn_core(
2025-08-14T22:05:35.5265398Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core
2025-08-14T22:05:35.5265966Z     bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r)
2025-08-14T22:05:35.5266209Z 
2025-08-14T22:05:35.5266340Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T22:05:35.5584832Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.5585225Z return mod(**inputs) 2025-08-14T22:05:35.5585688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.5586184Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.5586665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.5587138Z outputs = layer_module( 2025-08-14T22:05:35.5587589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.5588058Z outputs = self.rel_attn( 2025-08-14T22:05:35.5588580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 418, in forward 2025-08-14T22:05:35.5589142Z v_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.v) 2025-08-14T22:05:35.5589335Z 2025-08-14T22:05:35.5589468Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.5589902Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.5590299Z return mod(**inputs) 2025-08-14T22:05:35.5590745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.5591242Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.5591750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.5592228Z outputs = layer_module( 2025-08-14T22:05:35.5592680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.5593147Z outputs = self.rel_attn( 2025-08-14T22:05:35.5593596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:05:35.5594071Z attn_vec = self.rel_attn_core( 2025-08-14T22:05:35.5594587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core 2025-08-14T22:05:35.5595141Z attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h) 2025-08-14T22:05:35.5595372Z 2025-08-14T22:05:35.5595500Z cudagraph partition due to non gpu ops. 
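The call sites reported above are the einsums in XLNet's relative attention (modeling_xlnet.py lines 416-418, 422, 263, 266, 294, 304) and its feed-forward activation (activations.py line 69). The sketch below is illustrative only: the shapes and tensor names are assumptions, not values from this run, and plain tensors stand in for the real XLNetRelativeAttention module (the r_w_bias/r_r_bias additions and the `cat` key input are omitted). It shows the same einsum pattern under torch.compile's "reduce-overhead" mode, which enables CUDA graph capture in Inductor; ops that do not stay on the GPU force the captured graph to be split, which is what the "cudagraph partition due to non gpu ops" messages report.

import torch

# Illustrative shapes (assumed): i/j = seq_len, b = batch, n = heads, d = head_dim, h = hidden
seq_len, bsz, n_head, d_head, d_model = 8, 2, 4, 16, 64
device = "cuda" if torch.cuda.is_available() else "cpu"

h = torch.randn(seq_len, bsz, d_model, device=device)
q = torch.randn(d_model, n_head, d_head, device=device)
k = torch.randn(d_model, n_head, d_head, device=device)

def rel_attn_scores(h, q, k):
    # Same einsum equations as the traced lines above (modeling_xlnet.py:416, 417, 263).
    q_head_h = torch.einsum("ibh,hnd->ibnd", h, q)
    k_head_h = torch.einsum("ibh,hnd->ibnd", h, k)
    ac = torch.einsum("ibnd,jbnd->bnij", q_head_h, k_head_h)
    return ac

# "reduce-overhead" asks Inductor to capture CUDA graphs when a GPU is available.
compiled = torch.compile(rel_attn_scores, mode="reduce-overhead")
print(compiled(h, q, k).shape)  # torch.Size([2, 4, 8, 8])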
Found from : 2025-08-14T22:05:35.5806131Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.5806243Z return mod(**inputs) 2025-08-14T22:05:35.5806565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.5806669Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.5807014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.5807101Z outputs = layer_module( 2025-08-14T22:05:35.5807426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.5807515Z outputs = self.rel_attn( 2025-08-14T22:05:35.5807831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-08-14T22:05:35.5807957Z output_h = self.post_attention(h, attn_vec) 2025-08-14T22:05:35.5808322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-08-14T22:05:35.5808463Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-08-14T22:05:35.5808488Z 2025-08-14T22:05:35.5808615Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.5808865Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.5808955Z return mod(**inputs) 2025-08-14T22:05:35.5809276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.5809376Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.5809713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.5809796Z outputs = layer_module( 2025-08-14T22:05:35.5810124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.5810211Z outputs = self.rel_attn( 2025-08-14T22:05:35.5810601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-08-14T22:05:35.5810718Z output_h = self.post_attention(h, attn_vec) 2025-08-14T22:05:35.5811062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-08-14T22:05:35.5811198Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-08-14T22:05:35.5811218Z 2025-08-14T22:05:35.5811316Z cudagraph partition due to non gpu ops 2025-08-14T22:05:35.5811463Z cudagraph partition due to non gpu ops. 
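For reference, the call sites named in these traces are plain torch.einsum contractions inside XLNet's relative attention (transformers' modeling_xlnet.py). A minimal, self-contained sketch of those contractions follows; the tensor shapes are illustrative assumptions, not values from the benchmark, and XLNet's rel_shift and segment-bias terms are omitted (the relative encoding r is kept at seq_len so the scores line up without shifting).

import torch

# Illustrative shapes (assumptions): seq_len i/j, batch b, heads n, head dim d, model dim h
seq_len, bsz, n_head, d_head, d_model = 8, 2, 4, 16, 64

h = torch.randn(seq_len, bsz, d_model)        # hidden states [i, b, h]
r = torch.randn(seq_len, bsz, d_model)        # relative position encodings
q_w = torch.randn(d_model, n_head, d_head)    # plays the role of self.q
k_w = torch.randn(d_model, n_head, d_head)    # self.k
v_w = torch.randn(d_model, n_head, d_head)    # self.v
r_w = torch.randn(d_model, n_head, d_head)    # self.r
o_w = torch.randn(d_model, n_head, d_head)    # self.o
r_w_bias = torch.randn(n_head, d_head)
r_r_bias = torch.randn(n_head, d_head)

q_head_h = torch.einsum("ibh,hnd->ibnd", h, q_w)    # modeling_xlnet.py:416
k_head_h = torch.einsum("ibh,hnd->ibnd", h, k_w)    # :417 ("cat" is just h when no memory is attached)
v_head_h = torch.einsum("ibh,hnd->ibnd", h, v_w)    # :418
k_head_r = torch.einsum("ibh,hnd->ibnd", r, r_w)    # :422

ac = torch.einsum("ibnd,jbnd->bnij", q_head_h + r_w_bias, k_head_h)  # :263, content score
bd = torch.einsum("ibnd,jbnd->bnij", q_head_h + r_r_bias, k_head_r)  # :266, position score
attn_prob = ((ac + bd) * d_head ** -0.5).softmax(dim=3)
attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h)      # :294
attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, o_w)              # :304

print(attn_out.shape)  # torch.Size([8, 2, 64])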
Found from : 2025-08-14T22:05:35.5958417Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.5958496Z return mod(**inputs) 2025-08-14T22:05:35.5958822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.5958922Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.5959239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.5959327Z outputs = layer_module( 2025-08-14T22:05:35.5959646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 512, in forward 2025-08-14T22:05:35.5959941Z output_h = apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, output_h) 2025-08-14T22:05:35.5960271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:05:35.5960365Z return forward_fn(*input_tensors) 2025-08-14T22:05:35.5960692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 518, in ff_chunk 2025-08-14T22:05:35.5960784Z output_x = self.ff(output_x) 2025-08-14T22:05:35.5961128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 464, in forward 2025-08-14T22:05:35.5961326Z output = self.activation_function(output) 2025-08-14T22:05:35.5961600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:05:35.5961689Z return self.act(input) 2025-08-14T22:05:35.5961704Z 2025-08-14T22:05:35.5961803Z cudagraph partition due to non gpu ops 2025-08-14T22:05:35.5961927Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.5962179Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.5962261Z return mod(**inputs) 2025-08-14T22:05:35.5962587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.5962689Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.5963010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.5963098Z outputs = layer_module( 2025-08-14T22:05:35.5963414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.5963497Z outputs = self.rel_attn( 2025-08-14T22:05:35.5963820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward 2025-08-14T22:05:35.5963942Z q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q) 2025-08-14T22:05:35.5963955Z 2025-08-14T22:05:35.5964112Z cudagraph partition due to non gpu ops. 
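The frames above all land on the relative-attention einsums in modeling_xlnet.py. As a shape reference, here is a minimal standalone sketch of the "ibh,hnd->ibnd" projection (lines 416/417 in the trace) and the "ibnd,jbnd->bnij" score contraction (line 263); the tensor sizes, and the reuse of the same hidden tensor for the key where the real layer concatenates memory states, are illustrative assumptions rather than values taken from this log.

# Sketch only: i = seq len, b = batch, h = d_model, n = heads, d = head dim.
import torch

i, b, h, n, d = 8, 2, 768, 12, 64
hidden   = torch.randn(i, b, h)   # content stream handed to rel_attn
q_proj   = torch.randn(h, n, d)   # stands in for self.q
k_proj   = torch.randn(h, n, d)   # stands in for self.k
r_w_bias = torch.randn(n, d)      # stands in for self.r_w_bias

q_head_h = torch.einsum("ibh,hnd->ibnd", hidden, q_proj)             # as at line 416
k_head_h = torch.einsum("ibh,hnd->ibnd", hidden, k_proj)             # as at line 417
ac = torch.einsum("ibnd,jbnd->bnij", q_head_h + r_w_bias, k_head_h)  # as at line 263

print(q_head_h.shape, k_head_h.shape, ac.shape)
# torch.Size([8, 2, 12, 64]) torch.Size([8, 2, 12, 64]) torch.Size([2, 12, 8, 8])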
Found from : 2025-08-14T22:05:35.6092792Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6092876Z return mod(**inputs) 2025-08-14T22:05:35.6093198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6093299Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6093615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6093700Z outputs = layer_module( 2025-08-14T22:05:35.6094014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6094105Z outputs = self.rel_attn( 2025-08-14T22:05:35.6094417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 417, in forward 2025-08-14T22:05:35.6094539Z k_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.k) 2025-08-14T22:05:35.6094552Z 2025-08-14T22:05:35.6094681Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.6094928Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6095007Z return mod(**inputs) 2025-08-14T22:05:35.6095358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6095460Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6095855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6095941Z outputs = layer_module( 2025-08-14T22:05:35.6096308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6096398Z outputs = self.rel_attn( 2025-08-14T22:05:35.6096737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:05:35.6096838Z attn_vec = self.rel_attn_core( 2025-08-14T22:05:35.6097179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 263, in rel_attn_core 2025-08-14T22:05:35.6097343Z ac = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_w_bias, k_head_h) 2025-08-14T22:05:35.6097356Z 2025-08-14T22:05:35.6097487Z cudagraph partition due to non gpu ops. 
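Every partition point reported above resolves to the same handful of einsum contractions inside XLNet's relative attention (modeling_xlnet.py lines 416-422, 263, 266, 294 and 304) plus the feed-forward activation. For reference, the following is a minimal, self-contained sketch of those contractions; it is not taken from the benchmark harness, the tensor names and sizes are illustrative assumptions rather than values from this run, and it omits the relative shift, scaling and masking that the real rel_attn_core applies between the flagged einsums:

import torch

# Illustrative sizes only (assumptions, not taken from the job).
i = j = 8          # query / key sequence length
b = 2              # batch size
n = 4              # attention heads
d = 16             # size per head
h_dim = n * d      # hidden size

h_stream = torch.randn(i, b, h_dim)   # content stream, stands in for `h`
cat = torch.randn(j, b, h_dim)        # [memory; hidden] stream, stands in for `cat`
r = torch.randn(j, b, h_dim)          # relative positional encoding, stands in for `r`
q_w, k_w, v_w, r_w, o_w = (torch.randn(h_dim, n, d) for _ in range(5))  # self.q/k/v/r/o
r_w_bias = torch.randn(n, d)          # self.r_w_bias
r_r_bias = torch.randn(n, d)          # self.r_r_bias

# Projections flagged at modeling_xlnet.py lines 416-422.
q_head = torch.einsum("ibh,hnd->ibnd", h_stream, q_w)
k_head_h = torch.einsum("ibh,hnd->ibnd", cat, k_w)
v_head_h = torch.einsum("ibh,hnd->ibnd", cat, v_w)
k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(r_w.dtype), r_w)

# Attention scores, context and output projection flagged at lines 263, 266, 294 and 304.
ac = torch.einsum("ibnd,jbnd->bnij", q_head + r_w_bias, k_head_h)
bd = torch.einsum("ibnd,jbnd->bnij", q_head + r_r_bias, k_head_r)
attn_prob = torch.softmax(ac + bd, dim=3)   # simplified: no rel_shift, scale or mask here
attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h)
attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, o_w)

The stacks in the log are the user-code origins the cudagraph partitioner attributes the non-GPU ops to; the sketch only mirrors those flagged call sites so the einsum patterns can be read in one place.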
Found from : 2025-08-14T22:05:35.6225505Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6225592Z return mod(**inputs) 2025-08-14T22:05:35.6225934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6226036Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6226431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6226516Z outputs = layer_module( 2025-08-14T22:05:35.6226886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6226976Z outputs = self.rel_attn( 2025-08-14T22:05:35.6227289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward 2025-08-14T22:05:35.6227482Z k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r) 2025-08-14T22:05:35.6227497Z 2025-08-14T22:05:35.6227625Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.6227874Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6227959Z return mod(**inputs) 2025-08-14T22:05:35.6228275Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6228384Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6228736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6228843Z outputs = layer_module( 2025-08-14T22:05:35.6229168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6229253Z outputs = self.rel_attn( 2025-08-14T22:05:35.6229590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:05:35.6229686Z attn_vec = self.rel_attn_core( 2025-08-14T22:05:35.6230026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core 2025-08-14T22:05:35.6230192Z bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r) 2025-08-14T22:05:35.6230205Z 2025-08-14T22:05:35.6230329Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:05:35.6230600Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6230687Z return mod(**inputs) 2025-08-14T22:05:35.6231005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6231102Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6231426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6231508Z outputs = layer_module( 2025-08-14T22:05:35.6231831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6231913Z outputs = self.rel_attn( 2025-08-14T22:05:35.6239494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 418, in forward 2025-08-14T22:05:35.6239644Z v_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.v) 2025-08-14T22:05:35.6239666Z 2025-08-14T22:05:35.6239808Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.6240073Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6240159Z return mod(**inputs) 2025-08-14T22:05:35.6240506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6240620Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6245405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6245501Z outputs = layer_module( 2025-08-14T22:05:35.6245821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6245913Z outputs = self.rel_attn( 2025-08-14T22:05:35.6246232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:05:35.6246320Z attn_vec = self.rel_attn_core( 2025-08-14T22:05:35.6246668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core 2025-08-14T22:05:35.6246870Z attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h) 2025-08-14T22:05:35.6246886Z 2025-08-14T22:05:35.6247024Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:05:35.6247283Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6247366Z return mod(**inputs) 2025-08-14T22:05:35.6247697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6247800Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6248118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6248207Z outputs = layer_module( 2025-08-14T22:05:35.6248522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6248609Z outputs = self.rel_attn( 2025-08-14T22:05:35.6249328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-08-14T22:05:35.6249498Z output_h = self.post_attention(h, attn_vec) 2025-08-14T22:05:35.6249850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-08-14T22:05:35.6249992Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-08-14T22:05:35.6250005Z 2025-08-14T22:05:35.6250137Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.6250388Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6250832Z return mod(**inputs) 2025-08-14T22:05:35.6251158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6251260Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6251582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6251669Z outputs = layer_module( 2025-08-14T22:05:35.6251985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6252071Z outputs = self.rel_attn( 2025-08-14T22:05:35.6252385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-08-14T22:05:35.6252490Z output_h = self.post_attention(h, attn_vec) 2025-08-14T22:05:35.6252943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-08-14T22:05:35.6253132Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-08-14T22:05:35.6253149Z 2025-08-14T22:05:35.6253308Z cudagraph partition due to non gpu ops 2025-08-14T22:05:35.6253483Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:05:35.6253858Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6253984Z return mod(**inputs) 2025-08-14T22:05:35.6254480Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6254589Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6254913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6254997Z outputs = layer_module( 2025-08-14T22:05:35.6255416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 512, in forward 2025-08-14T22:05:35.6255740Z output_h = apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, output_h) 2025-08-14T22:05:35.6256105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:05:35.6256208Z return forward_fn(*input_tensors) 2025-08-14T22:05:35.6256528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 518, in ff_chunk 2025-08-14T22:05:35.6256623Z output_x = self.ff(output_x) 2025-08-14T22:05:35.6256938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 464, in forward 2025-08-14T22:05:35.6257042Z output = self.activation_function(output) 2025-08-14T22:05:35.6257315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:05:35.6257402Z return self.act(input) 2025-08-14T22:05:35.6257415Z 2025-08-14T22:05:35.6257518Z cudagraph partition due to non gpu ops 2025-08-14T22:05:35.6257653Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.6257902Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6258012Z return mod(**inputs) 2025-08-14T22:05:35.6258338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6258439Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6258765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6258848Z outputs = layer_module( 2025-08-14T22:05:35.6259168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6259282Z outputs = self.rel_attn( 2025-08-14T22:05:35.6259598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward 2025-08-14T22:05:35.6259762Z q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q) 2025-08-14T22:05:35.6259795Z 2025-08-14T22:05:35.6259929Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:05:35.6260177Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6260266Z return mod(**inputs) 2025-08-14T22:05:35.6260583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6260690Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6261010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6261095Z outputs = layer_module( 2025-08-14T22:05:35.6261415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6261499Z outputs = self.rel_attn( 2025-08-14T22:05:35.6261815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 417, in forward 2025-08-14T22:05:35.6261946Z k_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.k) 2025-08-14T22:05:35.6261959Z 2025-08-14T22:05:35.6262089Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.6262368Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6262451Z return mod(**inputs) 2025-08-14T22:05:35.6262772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6262887Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6263206Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6263291Z outputs = layer_module( 2025-08-14T22:05:35.6263634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6263726Z outputs = self.rel_attn( 2025-08-14T22:05:35.6264052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:05:35.6264146Z attn_vec = self.rel_attn_core( 2025-08-14T22:05:35.6264488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 263, in rel_attn_core 2025-08-14T22:05:35.6264661Z ac = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_w_bias, k_head_h) 2025-08-14T22:05:35.6264675Z 2025-08-14T22:05:35.6264803Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:05:35.6265055Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6265139Z return mod(**inputs) 2025-08-14T22:05:35.6265460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6265572Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6265927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6266023Z outputs = layer_module( 2025-08-14T22:05:35.6266519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6266634Z outputs = self.rel_attn( 2025-08-14T22:05:35.6267116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward 2025-08-14T22:05:35.6267386Z k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r) 2025-08-14T22:05:35.6267405Z 2025-08-14T22:05:35.6267595Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.6267996Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6268080Z return mod(**inputs) 2025-08-14T22:05:35.6268409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6268508Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6268829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6268918Z outputs = layer_module( 2025-08-14T22:05:35.6269235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6269320Z outputs = self.rel_attn( 2025-08-14T22:05:35.6269640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:05:35.6273854Z attn_vec = self.rel_attn_core( 2025-08-14T22:05:35.6274209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core 2025-08-14T22:05:35.6274374Z bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r) 2025-08-14T22:05:35.6274386Z 2025-08-14T22:05:35.6274515Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:05:35.6274804Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6274886Z return mod(**inputs) 2025-08-14T22:05:35.6275216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6275315Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6275635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6275724Z outputs = layer_module( 2025-08-14T22:05:35.6276042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6276151Z outputs = self.rel_attn( 2025-08-14T22:05:35.6276476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 418, in forward 2025-08-14T22:05:35.6276603Z v_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.v) 2025-08-14T22:05:35.6276616Z 2025-08-14T22:05:35.6276744Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.6276991Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6277071Z return mod(**inputs) 2025-08-14T22:05:35.6277395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6277495Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6277821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6277905Z outputs = layer_module( 2025-08-14T22:05:35.6278247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6278337Z outputs = self.rel_attn( 2025-08-14T22:05:35.6278657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:05:35.6278749Z attn_vec = self.rel_attn_core( 2025-08-14T22:05:35.6279095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core 2025-08-14T22:05:35.6279272Z attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h) 2025-08-14T22:05:35.6279285Z 2025-08-14T22:05:35.6279415Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:05:35.6279662Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6279744Z return mod(**inputs) 2025-08-14T22:05:35.6280072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6280179Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6280498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6280588Z outputs = layer_module( 2025-08-14T22:05:35.6280903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6280993Z outputs = self.rel_attn( 2025-08-14T22:05:35.6281384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-08-14T22:05:35.6281496Z output_h = self.post_attention(h, attn_vec) 2025-08-14T22:05:35.6281850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-08-14T22:05:35.6281993Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-08-14T22:05:35.6282007Z 2025-08-14T22:05:35.6282143Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:05:35.6282411Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:05:35.6282495Z return mod(**inputs) 2025-08-14T22:05:35.6282825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:05:35.6282926Z transformer_outputs = self.transformer( 2025-08-14T22:05:35.6283249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:05:35.6283341Z outputs = layer_module( 2025-08-14T22:05:35.6283661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:05:35.6283785Z outputs = self.rel_attn( 2025-08-14T22:05:35.6284110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-08-14T22:05:35.6284293Z output_h = self.post_attention(h, attn_vec) 2025-08-14T22:05:35.6284694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-08-14T22:05:35.6284834Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-08-14T22:05:35.6284847Z 2025-08-14T22:05:35.6284950Z cudagraph partition due to non gpu ops 2025-08-14T22:05:35.6285080Z cudagraph partition due to non gpu ops. 
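The einsums listed above are the relative-attention contractions in Hugging Face's XLNet; the partition messages appear to come from Inductor's cudagraph partitioning, which splits the graph around ops it does not treat as GPU ops (everything here, since this shard runs on CPU). A minimal, hedged way to exercise the same forward path outside the benchmark harness is sketched below; the tiny XLNetConfig sizes are arbitrary assumptions, this is not the configuration the job benchmarks, and it is not guaranteed to emit the same diagnostics.

    # Hedged sketch, not the benchmark harness: drive the same XLNet forward pass
    # (relative-attention einsums, feed-forward activation, loss) under torch.compile.
    import torch
    from transformers import XLNetConfig, XLNetLMHeadModel

    # Illustrative, deliberately tiny config (assumption, not the benchmarked model size).
    config = XLNetConfig(vocab_size=1000, d_model=64, n_layer=2, n_head=2, d_inner=128)
    model = XLNetLMHeadModel(config).eval()

    input_ids = torch.randint(0, config.vocab_size, (1, 16))

    compiled = torch.compile(model)  # TorchInductor backend, as in this CI job
    with torch.no_grad():
        # Passing labels exercises the loss path seen at modeling_xlnet.py:1630.
        out = compiled(input_ids=input_ids, labels=input_ids)
    print(float(out.loss))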
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
    transformer_outputs = self.transformer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
    outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 512, in forward
    output_h = apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, output_h)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
    return forward_fn(*input_tensors)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 518, in ff_chunk
    output_x = self.ff(output_x)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 464, in forward
    output = self.activation_function(output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1630, in forward
    loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
2025-08-14T22:05:47.0059648Z Compilation time (from dynamo_timed): 46.740226199
2025-08-14T22:05:47.0235336Z pass
2025-08-14T22:05:47.0235985Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:05:47.0237094Z TIMING: _recursive_pre_grad_passes:0.10592 _recursive_joint_graph_passes:1.95707 _recursive_post_grad_passes:0.30036 async_compile.wait:0.62312 code_gen:9.04661 inductor_compile:16.96647 backend_compile:38.35037 gc:0.00027 entire_frame_compile:46.74023 total_wall_time:46.74023
2025-08-14T22:05:47.0238257Z STATS: call_* op count: 818 | FakeTensorMode.__torch_dispatch__:91970 | FakeTensor.__torch_dispatch__:14519 | ProxyTorchDispatchMode.__torch_dispatch__:18687
2025-08-14T22:05:47.0238891Z Dynamo produced 1 graphs covering 818 ops with 0 graph breaks (0 unique)
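The TIMING line above is the harness's own dynamo_timed breakdown: of the 46.74 s entire_frame_compile, backend_compile accounts for 38.35 s and inductor_compile for 16.97 s, with code_gen at 9.05 s. A rough, hedged way to observe the same first-call compile overhead versus steady-state latency for any torch.compile'd module is plain wall-clock timing of the first and second calls; the toy module below is an arbitrary stand-in, not the benchmarked model.

    # Hedged sketch: compare cold (compile) and warm call times of a torch.compile'd module.
    import time
    import torch

    def timed_call(fn, *args):
        t0 = time.perf_counter()
        fn(*args)
        return time.perf_counter() - t0

    model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.GELU()).eval()
    x = torch.randn(8, 64)
    compiled = torch.compile(model)  # TorchInductor, CPU in this job's case

    with torch.no_grad():
        first = timed_call(compiled, x)   # includes Dynamo tracing + Inductor codegen
        steady = timed_call(compiled, x)  # reuses the compiled artifact
    print(f"first call: {first:.3f}s, steady state: {steady:.6f}s")
    # For a per-phase breakdown closer to the TIMING line, torch._dynamo.utils.compile_times()
    # can be printed after the first call (assumed available in this PyTorch build).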
2025-08-14T22:05:54.1456028Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T22:05:54.1457135Z   from pkg_resources import resource_filename
2025-08-14T22:05:54.9696747Z
2025-08-14T22:05:57.2269096Z loading model: 0it [00:00, ?it/s]
2025-08-14T22:05:57.2269438Z loading model: 0it [00:02, ?it/s]
2025-08-14T22:05:57.2296310Z cpu eval YituTechConvBert
2025-08-14T22:05:59.1121820Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:05:59.7725170Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:06:00.4453103Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:06:24.5466735Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5467476Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5467744Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5467986Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5468311Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5468623Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5468911Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5469332Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
    generator_hidden_states = self.convbert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
    hidden_states = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
    self_attention_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward
    mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward
    x = self.depthwise(hidden_states)
The same "cudagraph partition due to non gpu ops" diagnostic is reported repeatedly, with the same call path from forward_pass into the ConvBERT layer, for the other ops in its mixed-attention and feed-forward blocks in modeling_convbert.py:
  line 283, in forward: x = self.pointwise(x)
  line 362, in forward: conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer)
  line 380, in forward: conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer)
  line 405, in forward: context_layer = torch.cat([context_layer, conv_out], 2)
  activations.py, line 69, in forward (reached via apply_chunking_to_forward from modeling_convbert.py lines 586, 593 and 514): return self.act(input)
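The ConvBERT partition points above all sit in the model's mixed-attention convolution branch (the separable depthwise/pointwise convolution, the generated convolution kernel and its matmul, and the concat back onto the attention output), plus the feed-forward activation. The sketch below is a hedged way to exercise the same ops under torch.compile; the tiny ConvBertConfig values are arbitrary assumptions, whereas the CI job benchmarks the pretrained YituTech ConvBERT model.

    # Hedged sketch, not the benchmark harness: a tiny from-scratch ConvBERT compiled
    # with TorchInductor, exercising the depthwise/pointwise conv, conv_kernel_layer,
    # matmul and cat paths named in the traces above.
    import torch
    from transformers import ConvBertConfig, ConvBertModel

    # Illustrative sizes (assumption); hidden_size must stay divisible by the head count.
    config = ConvBertConfig(
        vocab_size=1000,
        hidden_size=64,
        num_hidden_layers=1,
        num_attention_heads=4,
        intermediate_size=128,
    )
    model = ConvBertModel(config).eval()
    input_ids = torch.randint(0, config.vocab_size, (1, 32))

    compiled = torch.compile(model)
    with torch.no_grad():
        out = compiled(input_ids=input_ids)
    print(out.last_hidden_state.shape)  # expected: torch.Size([1, 32, 64])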
Found from : 2025-08-14T22:06:24.5551880Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5552282Z return mod(**inputs) 2025-08-14T22:06:24.5552758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5553274Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5553781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5554283Z hidden_states = self.encoder( 2025-08-14T22:06:24.5554872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5555380Z layer_outputs = layer_module( 2025-08-14T22:06:24.5555810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5556266Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5556785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5557302Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5557859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5558430Z self_outputs = self.self( 2025-08-14T22:06:24.5558947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-08-14T22:06:24.5559550Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-08-14T22:06:24.5560161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-08-14T22:06:24.5560665Z x = self.depthwise(hidden_states) 2025-08-14T22:06:24.5560830Z 2025-08-14T22:06:24.5560966Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5561498Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5561910Z return mod(**inputs) 2025-08-14T22:06:24.5562380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5562894Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5563431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5563932Z hidden_states = self.encoder( 2025-08-14T22:06:24.5564419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5564908Z layer_outputs = layer_module( 2025-08-14T22:06:24.5565338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5565821Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5566320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5566826Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5567330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5567829Z self_outputs = self.self( 2025-08-14T22:06:24.5568299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-08-14T22:06:24.5568910Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-08-14T22:06:24.5569517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 283, in forward 2025-08-14T22:06:24.5570015Z x = self.pointwise(x) 2025-08-14T22:06:24.5570154Z 2025-08-14T22:06:24.5570256Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5570543Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5570997Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5571407Z return mod(**inputs) 2025-08-14T22:06:24.5571872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5580647Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5581360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5582018Z hidden_states = self.encoder( 2025-08-14T22:06:24.5582653Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5583315Z layer_outputs = layer_module( 2025-08-14T22:06:24.5583782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5584234Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5584739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5585287Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5585796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5586292Z self_outputs = self.self( 2025-08-14T22:06:24.5586824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward 2025-08-14T22:06:24.5589603Z conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer) 2025-08-14T22:06:24.5589824Z 2025-08-14T22:06:24.5589924Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5590182Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5590468Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5590913Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5591309Z return mod(**inputs) 2025-08-14T22:06:24.5591777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5592313Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5592811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5593314Z hidden_states = self.encoder( 2025-08-14T22:06:24.5593798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5594294Z layer_outputs = layer_module( 2025-08-14T22:06:24.5594736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5595184Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5595681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5596184Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5596690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5597181Z self_outputs = self.self( 2025-08-14T22:06:24.5597664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward 2025-08-14T22:06:24.5598221Z conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer) 2025-08-14T22:06:24.5598455Z 2025-08-14T22:06:24.5598552Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5598842Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5599291Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5599688Z return mod(**inputs) 2025-08-14T22:06:24.5600156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5600673Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5601312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5601909Z hidden_states = self.encoder( 2025-08-14T22:06:24.5602400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5602897Z layer_outputs = layer_module( 2025-08-14T22:06:24.5603318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5603772Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5604281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5604787Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5605329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5605885Z self_outputs = self.self( 2025-08-14T22:06:24.5606365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward 2025-08-14T22:06:24.5606906Z context_layer = torch.cat([context_layer, conv_out], 2) 2025-08-14T22:06:24.5607121Z 2025-08-14T22:06:24.5607219Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5607475Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5607761Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5608201Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5608606Z return mod(**inputs) 2025-08-14T22:06:24.5609084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5609592Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5610133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5610635Z hidden_states = self.encoder( 2025-08-14T22:06:24.5611122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5611613Z layer_outputs = layer_module( 2025-08-14T22:06:24.5612042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5612519Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5613018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 586, in forward 2025-08-14T22:06:24.5613532Z layer_output = apply_chunking_to_forward( 2025-08-14T22:06:24.5614033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:06:24.5614526Z return forward_fn(*input_tensors) 2025-08-14T22:06:24.5615059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 593, in feed_forward_chunk 2025-08-14T22:06:24.5615717Z intermediate_output = self.intermediate(attention_output) 2025-08-14T22:06:24.5620520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 514, in forward 2025-08-14T22:06:24.5621072Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T22:06:24.5621537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:06:24.5621964Z return self.act(input) 2025-08-14T22:06:24.5622103Z 2025-08-14T22:06:24.5622205Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5622457Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5622710Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5622955Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5623193Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5623480Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5623728Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5623979Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5624251Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T22:06:24.5624703Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:06:24.5625113Z     return mod(**inputs)
2025-08-14T22:06:24.5625575Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
2025-08-14T22:06:24.5626090Z     generator_hidden_states = self.convbert(
2025-08-14T22:06:24.5626646Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
2025-08-14T22:06:24.5627151Z     hidden_states = self.encoder(
2025-08-14T22:06:24.5627638Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
2025-08-14T22:06:24.5628137Z     layer_outputs = layer_module(
2025-08-14T22:06:24.5628567Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:06:24.5629007Z     return super().__call__(*args, **kwargs)
2025-08-14T22:06:24.5629510Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
2025-08-14T22:06:24.5630039Z     self_attention_outputs = self.attention(
2025-08-14T22:06:24.5630659Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
2025-08-14T22:06:24.5631156Z     self_outputs = self.self(
2025-08-14T22:06:24.5631662Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward
2025-08-14T22:06:24.5632273Z     mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2))
2025-08-14T22:06:24.5632883Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward
2025-08-14T22:06:24.5633379Z     x = self.depthwise(hidden_states)
2025-08-14T22:06:24.5633553Z 
2025-08-14T22:06:24.5633683Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T22:06:24.5634194Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:06:24.5634595Z     return mod(**inputs)
2025-08-14T22:06:24.5635066Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
2025-08-14T22:06:24.5635578Z     generator_hidden_states = self.convbert(
2025-08-14T22:06:24.5636086Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
2025-08-14T22:06:24.5636575Z     hidden_states = self.encoder(
2025-08-14T22:06:24.5637073Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
2025-08-14T22:06:24.5637576Z     layer_outputs = layer_module(
2025-08-14T22:06:24.5638011Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:06:24.5638509Z     return super().__call__(*args, **kwargs)
2025-08-14T22:06:24.5639011Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
2025-08-14T22:06:24.5639524Z     self_attention_outputs = self.attention(
2025-08-14T22:06:24.5640020Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
2025-08-14T22:06:24.5640518Z     self_outputs = self.self(
2025-08-14T22:06:24.5640997Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward
2025-08-14T22:06:24.5641704Z     mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2))
2025-08-14T22:06:24.5642307Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 283, in forward
2025-08-14T22:06:24.5642796Z     x = self.pointwise(x)
2025-08-14T22:06:24.5642937Z 
2025-08-14T22:06:24.5643042Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5643332Z cudagraph partition due to non gpu ops.
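Note: the two frames at modeling_convbert.py lines 282-283 (self.depthwise, self.pointwise) sit inside ConvBERT's separable 1-D convolution, which the attention module calls through self.key_conv_attn_layer at line 347. The sketch below is a minimal, hypothetical reconstruction of that shape of layer, not the transformers implementation; the kernel size, padding, and bias handling are assumptions.

    import torch
    from torch import nn

    class SeparableConv1DSketch(nn.Module):
        """Hypothetical depthwise + pointwise pair matching the frames above."""

        def __init__(self, channels: int, out_channels: int, kernel_size: int):
            super().__init__()
            # Depthwise: one filter per input channel (groups=channels), like self.depthwise.
            self.depthwise = nn.Conv1d(
                channels, channels, kernel_size,
                padding=kernel_size // 2, groups=channels, bias=False,
            )
            # Pointwise: kernel_size=1 conv that mixes channels, like self.pointwise.
            self.pointwise = nn.Conv1d(channels, out_channels, kernel_size=1, bias=False)

        def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
            # Input arrives as (batch, channels, seq_len) after the
            # hidden_states.transpose(1, 2) done by the caller (line 347 in the traceback).
            x = self.depthwise(hidden_states)  # corresponds to the frame at line 282
            x = self.pointwise(x)              # corresponds to the frame at line 283
            return x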
Found from :
2025-08-14T22:06:24.5643771Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:06:24.5644179Z     return mod(**inputs)
2025-08-14T22:06:24.5644720Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
2025-08-14T22:06:24.5649365Z     generator_hidden_states = self.convbert(
2025-08-14T22:06:24.5649883Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
2025-08-14T22:06:24.5650383Z     hidden_states = self.encoder(
2025-08-14T22:06:24.5650883Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
2025-08-14T22:06:24.5651382Z     layer_outputs = layer_module(
2025-08-14T22:06:24.5651818Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:06:24.5652270Z     return super().__call__(*args, **kwargs)
2025-08-14T22:06:24.5652770Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
2025-08-14T22:06:24.5653284Z     self_attention_outputs = self.attention(
2025-08-14T22:06:24.5653853Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
2025-08-14T22:06:24.5654344Z     self_outputs = self.self(
2025-08-14T22:06:24.5654816Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward
2025-08-14T22:06:24.5655374Z     conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer)
2025-08-14T22:06:24.5655593Z 
2025-08-14T22:06:24.5655705Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5655994Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5656272Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T22:06:24.5656723Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:06:24.5657128Z     return mod(**inputs)
2025-08-14T22:06:24.5657598Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
2025-08-14T22:06:24.5658119Z     generator_hidden_states = self.convbert(
2025-08-14T22:06:24.5658630Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
2025-08-14T22:06:24.5659184Z     hidden_states = self.encoder(
2025-08-14T22:06:24.5659742Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
2025-08-14T22:06:24.5660237Z     layer_outputs = layer_module(
2025-08-14T22:06:24.5660678Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:06:24.5661124Z     return super().__call__(*args, **kwargs)
2025-08-14T22:06:24.5661625Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
2025-08-14T22:06:24.5662146Z     self_attention_outputs = self.attention(
2025-08-14T22:06:24.5662658Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
2025-08-14T22:06:24.5663147Z     self_outputs = self.self(
2025-08-14T22:06:24.5663656Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward
2025-08-14T22:06:24.5664230Z     conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer)
2025-08-14T22:06:24.5664460Z 
2025-08-14T22:06:24.5664563Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5664845Z cudagraph partition due to non gpu ops.
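Note: the repeated "cudagraph partition due to non gpu ops" lines report that inductor's CUDA-graph partitioning found ops in the compiled region that it does not treat as GPU ops, so instead of capturing one CUDA graph it splits the region into separately captured partitions around them; each "Found from" traceback above points at the user-level call whose lowered ops triggered a split. The toy sketch below only illustrates that kind of mixed-device pattern under torch.compile; the module, shapes, and host round trip are invented for illustration, and it is not guaranteed to emit this exact diagnostic on any given PyTorch build.

    import torch

    class MixedDeviceModel(torch.nn.Module):
        """Hypothetical module mixing GPU matmuls with a CPU round trip."""

        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(64, 64)

        def forward(self, x):
            y = self.linear(x)                # GPU op: eligible for CUDA graph capture
            y = y.cpu().relu().to(x.device)   # host round trip: the kind of non-GPU work
                                              # the partition message refers to
            return self.linear(y)             # GPU op again, on the other side of the split

    if torch.cuda.is_available():
        model = MixedDeviceModel().cuda()
        # mode="reduce-overhead" is the documented way to enable CUDA graphs in inductor.
        compiled = torch.compile(model, mode="reduce-overhead")
        out = compiled(torch.randn(8, 64, device="cuda"))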
Found from : 2025-08-14T22:06:24.5665287Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5665689Z return mod(**inputs) 2025-08-14T22:06:24.5666181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5666703Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5667219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5667718Z hidden_states = self.encoder( 2025-08-14T22:06:24.5668204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5668699Z layer_outputs = layer_module( 2025-08-14T22:06:24.5669126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5669570Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5670071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5670588Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5671096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5671606Z self_outputs = self.self( 2025-08-14T22:06:24.5672088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward 2025-08-14T22:06:24.5672635Z context_layer = torch.cat([context_layer, conv_out], 2) 2025-08-14T22:06:24.5672847Z 2025-08-14T22:06:24.5672951Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5673197Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5673501Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5678592Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5678993Z return mod(**inputs) 2025-08-14T22:06:24.5679465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5680039Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5680540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5681094Z hidden_states = self.encoder( 2025-08-14T22:06:24.5681610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5682107Z layer_outputs = layer_module( 2025-08-14T22:06:24.5682529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5682985Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5683490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 586, in forward 2025-08-14T22:06:24.5684005Z layer_output = apply_chunking_to_forward( 2025-08-14T22:06:24.5684501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:06:24.5685000Z return forward_fn(*input_tensors) 2025-08-14T22:06:24.5685567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 593, in feed_forward_chunk 2025-08-14T22:06:24.5686162Z intermediate_output = self.intermediate(attention_output) 2025-08-14T22:06:24.5686727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 514, in forward 2025-08-14T22:06:24.5687280Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T22:06:24.5687755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:06:24.5688231Z return self.act(input) 2025-08-14T22:06:24.5688379Z 2025-08-14T22:06:24.5688544Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5688829Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5689080Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5689320Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5689565Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5689814Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5690051Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5690293Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5690573Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5691010Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5691415Z return mod(**inputs) 2025-08-14T22:06:24.5691890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5692405Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5692905Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5693427Z hidden_states = self.encoder( 2025-08-14T22:06:24.5693911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5694399Z layer_outputs = layer_module( 2025-08-14T22:06:24.5694834Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5695280Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5695778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5696318Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5696823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5697318Z self_outputs = self.self( 2025-08-14T22:06:24.5697788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-08-14T22:06:24.5698393Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-08-14T22:06:24.5699004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-08-14T22:06:24.5699510Z x = self.depthwise(hidden_states) 2025-08-14T22:06:24.5699674Z 2025-08-14T22:06:24.5699803Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5700254Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5700665Z return mod(**inputs) 2025-08-14T22:06:24.5701134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5701641Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5702150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5702694Z hidden_states = self.encoder( 2025-08-14T22:06:24.5707433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5707932Z layer_outputs = layer_module( 2025-08-14T22:06:24.5708365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5708825Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5709329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5709842Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5710371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5710873Z self_outputs = self.self( 2025-08-14T22:06:24.5711348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-08-14T22:06:24.5711952Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-08-14T22:06:24.5712561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 283, in forward 2025-08-14T22:06:24.5713048Z x = self.pointwise(x) 2025-08-14T22:06:24.5713197Z 2025-08-14T22:06:24.5713294Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5713591Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5714037Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5714433Z return mod(**inputs) 2025-08-14T22:06:24.5714901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5715431Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5715932Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5716434Z hidden_states = self.encoder( 2025-08-14T22:06:24.5716918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5717534Z layer_outputs = layer_module( 2025-08-14T22:06:24.5717985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5718437Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5718934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5719446Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5719948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5720444Z self_outputs = self.self( 2025-08-14T22:06:24.5720918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward 2025-08-14T22:06:24.5721549Z conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer) 2025-08-14T22:06:24.5721775Z 2025-08-14T22:06:24.5721874Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5722133Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5722422Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5722859Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5723259Z return mod(**inputs) 2025-08-14T22:06:24.5723728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5724235Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5724772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5725279Z hidden_states = self.encoder( 2025-08-14T22:06:24.5725764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5726251Z layer_outputs = layer_module( 2025-08-14T22:06:24.5726684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5727140Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5727642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5728174Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5728679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5729173Z self_outputs = self.self( 2025-08-14T22:06:24.5729651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward 2025-08-14T22:06:24.5730219Z conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer) 2025-08-14T22:06:24.5730460Z 2025-08-14T22:06:24.5730558Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5730852Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5731289Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5731746Z return mod(**inputs) 2025-08-14T22:06:24.5740728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5741435Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5742114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5742778Z hidden_states = self.encoder( 2025-08-14T22:06:24.5743365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5743853Z layer_outputs = layer_module( 2025-08-14T22:06:24.5744285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5744763Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5745268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5745769Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5746331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5749284Z self_outputs = self.self( 2025-08-14T22:06:24.5749768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward 2025-08-14T22:06:24.5750319Z context_layer = torch.cat([context_layer, conv_out], 2) 2025-08-14T22:06:24.5750535Z 2025-08-14T22:06:24.5750635Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5750896Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5751183Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5751631Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5752039Z return mod(**inputs) 2025-08-14T22:06:24.5752503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5753021Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5753533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5754095Z hidden_states = self.encoder( 2025-08-14T22:06:24.5754572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5755068Z layer_outputs = layer_module( 2025-08-14T22:06:24.5755492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5755935Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5756431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 586, in forward 2025-08-14T22:06:24.5756944Z layer_output = apply_chunking_to_forward( 2025-08-14T22:06:24.5757496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:06:24.5757986Z return forward_fn(*input_tensors) 2025-08-14T22:06:24.5758524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 593, in feed_forward_chunk 2025-08-14T22:06:24.5759128Z intermediate_output = self.intermediate(attention_output) 2025-08-14T22:06:24.5759688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 514, in forward 2025-08-14T22:06:24.5760229Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T22:06:24.5760755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:06:24.5761359Z return self.act(input) 2025-08-14T22:06:24.5761499Z 2025-08-14T22:06:24.5761598Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5761857Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5762149Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5762400Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5762638Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5762888Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5763132Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5763369Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5763654Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5764103Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5764531Z return mod(**inputs) 2025-08-14T22:06:24.5765058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5765577Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5766092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5766586Z hidden_states = self.encoder( 2025-08-14T22:06:24.5767078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5767577Z layer_outputs = layer_module( 2025-08-14T22:06:24.5783512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5784010Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5784541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5785100Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5785625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5786131Z self_outputs = self.self( 2025-08-14T22:06:24.5786625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-08-14T22:06:24.5787332Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-08-14T22:06:24.5787959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-08-14T22:06:24.5788462Z x = self.depthwise(hidden_states) 2025-08-14T22:06:24.5788637Z 2025-08-14T22:06:24.5788775Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5789241Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5789724Z return mod(**inputs) 2025-08-14T22:06:24.5790307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5790884Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5791403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5791899Z hidden_states = self.encoder( 2025-08-14T22:06:24.5792399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5792897Z layer_outputs = layer_module( 2025-08-14T22:06:24.5793330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5793788Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5794346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5794859Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5795365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5795893Z self_outputs = self.self( 2025-08-14T22:06:24.5796377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-08-14T22:06:24.5796980Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-08-14T22:06:24.5797586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 283, in forward 2025-08-14T22:06:24.5798135Z x = self.pointwise(x) 2025-08-14T22:06:24.5798310Z 2025-08-14T22:06:24.5798418Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5798724Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5799169Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5799584Z return mod(**inputs) 2025-08-14T22:06:24.5800060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5800568Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5801172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5801680Z hidden_states = self.encoder( 2025-08-14T22:06:24.5802169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5802660Z layer_outputs = layer_module( 2025-08-14T22:06:24.5803103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5803560Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5804098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5808829Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5809344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5809875Z self_outputs = self.self( 2025-08-14T22:06:24.5810353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward 2025-08-14T22:06:24.5810919Z conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer) 2025-08-14T22:06:24.5811155Z 2025-08-14T22:06:24.5811260Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5811528Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5811810Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5812265Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5812677Z return mod(**inputs) 2025-08-14T22:06:24.5813164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5813688Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5814202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5814712Z hidden_states = self.encoder( 2025-08-14T22:06:24.5815191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5815691Z layer_outputs = layer_module( 2025-08-14T22:06:24.5816123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5816571Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5817073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5817589Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5818122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5818663Z self_outputs = self.self( 2025-08-14T22:06:24.5819222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward 2025-08-14T22:06:24.5819796Z conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer) 2025-08-14T22:06:24.5820029Z 2025-08-14T22:06:24.5820140Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5820452Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5820903Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5821311Z return mod(**inputs) 2025-08-14T22:06:24.5821775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5822296Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5822812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5823318Z hidden_states = self.encoder( 2025-08-14T22:06:24.5823802Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5824305Z layer_outputs = layer_module( 2025-08-14T22:06:24.5824739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5825202Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5825711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5826229Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5826738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5827234Z self_outputs = self.self( 2025-08-14T22:06:24.5827740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward 2025-08-14T22:06:24.5828295Z context_layer = torch.cat([context_layer, conv_out], 2) 2025-08-14T22:06:24.5828508Z 2025-08-14T22:06:24.5828618Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5828870Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5829164Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5829615Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5830014Z return mod(**inputs) 2025-08-14T22:06:24.5830518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5831037Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5831547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5832048Z hidden_states = self.encoder( 2025-08-14T22:06:24.5832539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5833056Z layer_outputs = layer_module( 2025-08-14T22:06:24.5837741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5838206Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5838719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 586, in forward 2025-08-14T22:06:24.5839237Z layer_output = apply_chunking_to_forward( 2025-08-14T22:06:24.5839745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:06:24.5840275Z return forward_fn(*input_tensors) 2025-08-14T22:06:24.5840821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 593, in feed_forward_chunk 2025-08-14T22:06:24.5841511Z intermediate_output = self.intermediate(attention_output) 2025-08-14T22:06:24.5842060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 514, in forward 2025-08-14T22:06:24.5842642Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T22:06:24.5843122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:06:24.5843540Z return self.act(input) 2025-08-14T22:06:24.5843693Z 2025-08-14T22:06:24.5843795Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5844064Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5844319Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5844570Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5844819Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5845063Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5845308Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5845555Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5845845Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5846284Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5846690Z return mod(**inputs) 2025-08-14T22:06:24.5847158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5847724Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5848303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5849185Z hidden_states = self.encoder( 2025-08-14T22:06:24.5849750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5850253Z layer_outputs = layer_module( 2025-08-14T22:06:24.5850678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5851131Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5851637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5852156Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5852660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5853194Z self_outputs = self.self( 2025-08-14T22:06:24.5853675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-08-14T22:06:24.5854283Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-08-14T22:06:24.5854890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-08-14T22:06:24.5855393Z x = self.depthwise(hidden_states) 2025-08-14T22:06:24.5855556Z 2025-08-14T22:06:24.5855692Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5856139Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5856545Z return mod(**inputs) 2025-08-14T22:06:24.5857012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5857522Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5858048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5858557Z hidden_states = self.encoder( 2025-08-14T22:06:24.5859047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5859535Z layer_outputs = layer_module( 2025-08-14T22:06:24.5859954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5860437Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5860939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5861445Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5861953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5870777Z self_outputs = self.self( 2025-08-14T22:06:24.5871413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-08-14T22:06:24.5872221Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-08-14T22:06:24.5873058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 283, in forward 2025-08-14T22:06:24.5873568Z x = self.pointwise(x) 2025-08-14T22:06:24.5873709Z 2025-08-14T22:06:24.5873815Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5874104Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5874556Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5874962Z return mod(**inputs) 2025-08-14T22:06:24.5875428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5875944Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5876496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5877127Z hidden_states = self.encoder( 2025-08-14T22:06:24.5877614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5878112Z layer_outputs = layer_module( 2025-08-14T22:06:24.5878540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5878987Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5879493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5880031Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5880537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5881024Z self_outputs = self.self( 2025-08-14T22:06:24.5881590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward 2025-08-14T22:06:24.5882147Z conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer) 2025-08-14T22:06:24.5882366Z 2025-08-14T22:06:24.5882469Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5882721Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5883008Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5883452Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5883852Z return mod(**inputs) 2025-08-14T22:06:24.5884324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5884878Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5885386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5885877Z hidden_states = self.encoder( 2025-08-14T22:06:24.5886355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5886850Z layer_outputs = layer_module( 2025-08-14T22:06:24.5887294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5887751Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5888255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5888764Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5889265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5889769Z self_outputs = self.self( 2025-08-14T22:06:24.5890244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward 2025-08-14T22:06:24.5890801Z conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer) 2025-08-14T22:06:24.5891057Z 2025-08-14T22:06:24.5891183Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5891545Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5891984Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5892389Z return mod(**inputs) 2025-08-14T22:06:24.5892855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5893369Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5893872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5894388Z hidden_states = self.encoder( 2025-08-14T22:06:24.5894873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5895413Z layer_outputs = layer_module( 2025-08-14T22:06:24.5895841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5896290Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5896790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5897292Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5897823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5898322Z self_outputs = self.self( 2025-08-14T22:06:24.5898806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward 2025-08-14T22:06:24.5899344Z context_layer = torch.cat([context_layer, conv_out], 2) 2025-08-14T22:06:24.5899563Z 2025-08-14T22:06:24.5899663Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5899913Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5900190Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5900634Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5901034Z return mod(**inputs) 2025-08-14T22:06:24.5901500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5902034Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5902538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5903034Z hidden_states = self.encoder( 2025-08-14T22:06:24.5903521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5904016Z layer_outputs = layer_module( 2025-08-14T22:06:24.5904448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5904930Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5905428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 586, in forward 2025-08-14T22:06:24.5912362Z layer_output = apply_chunking_to_forward( 2025-08-14T22:06:24.5912872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:06:24.5913371Z return forward_fn(*input_tensors) 2025-08-14T22:06:24.5913902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 593, in feed_forward_chunk 2025-08-14T22:06:24.5914509Z intermediate_output = self.intermediate(attention_output) 2025-08-14T22:06:24.5915075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 514, in forward 2025-08-14T22:06:24.5915620Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T22:06:24.5916098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:06:24.5916524Z return self.act(input) 2025-08-14T22:06:24.5916666Z 2025-08-14T22:06:24.5916773Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5917024Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5917277Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5917531Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5917767Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5918039Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5918282Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5918515Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5918799Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5919246Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5919654Z return mod(**inputs) 2025-08-14T22:06:24.5920169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5920750Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5921351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5921844Z hidden_states = self.encoder( 2025-08-14T22:06:24.5922329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5922825Z layer_outputs = layer_module( 2025-08-14T22:06:24.5923255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5923695Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5924232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5924758Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5925262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5925748Z self_outputs = self.self( 2025-08-14T22:06:24.5926253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-08-14T22:06:24.5926859Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-08-14T22:06:24.5927460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-08-14T22:06:24.5927962Z x = self.depthwise(hidden_states) 2025-08-14T22:06:24.5928136Z 2025-08-14T22:06:24.5928267Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.5928733Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.5929129Z return mod(**inputs) 2025-08-14T22:06:24.5929594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.5930105Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.5930618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.5931108Z hidden_states = self.encoder( 2025-08-14T22:06:24.5931599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.5932091Z layer_outputs = layer_module( 2025-08-14T22:06:24.5932514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.5932974Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.5933471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.5933981Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.5934485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.5939260Z self_outputs = self.self( 2025-08-14T22:06:24.5939765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-08-14T22:06:24.5940366Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-08-14T22:06:24.5940974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 283, in forward 2025-08-14T22:06:24.5941464Z x = self.pointwise(x) 2025-08-14T22:06:24.5941602Z 2025-08-14T22:06:24.5941711Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.5941989Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T22:06:24.5942439Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:06:24.5942854Z     return mod(**inputs)
2025-08-14T22:06:24.5943341Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
2025-08-14T22:06:24.5943848Z     generator_hidden_states = self.convbert(
2025-08-14T22:06:24.5944356Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
2025-08-14T22:06:24.5944854Z     hidden_states = self.encoder(
2025-08-14T22:06:24.5945330Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
2025-08-14T22:06:24.5945824Z     layer_outputs = layer_module(
2025-08-14T22:06:24.5946258Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:06:24.5946709Z     return super().__call__(*args, **kwargs)
2025-08-14T22:06:24.5947205Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
2025-08-14T22:06:24.5947718Z     self_attention_outputs = self.attention(
2025-08-14T22:06:24.5948247Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
2025-08-14T22:06:24.5949068Z     self_outputs = self.self(
2025-08-14T22:06:24.5949632Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward
2025-08-14T22:06:24.5950192Z     conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer)
2025-08-14T22:06:24.5950410Z 
2025-08-14T22:06:24.5950516Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5950808Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5951091Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T22:06:24.5951535Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:06:24.5951936Z     return mod(**inputs)
2025-08-14T22:06:24.5952397Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
2025-08-14T22:06:24.5952909Z     generator_hidden_states = self.convbert(
2025-08-14T22:06:24.5953461Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
2025-08-14T22:06:24.5953953Z     hidden_states = self.encoder(
2025-08-14T22:06:24.5954441Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
2025-08-14T22:06:24.5954932Z     layer_outputs = layer_module(
2025-08-14T22:06:24.5955359Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:06:24.5955800Z     return super().__call__(*args, **kwargs)
2025-08-14T22:06:24.5956304Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
2025-08-14T22:06:24.5956820Z     self_attention_outputs = self.attention(
2025-08-14T22:06:24.5957369Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
2025-08-14T22:06:24.5957919Z     self_outputs = self.self(
2025-08-14T22:06:24.5958405Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward
2025-08-14T22:06:24.5958973Z     conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer)
2025-08-14T22:06:24.5959206Z 
2025-08-14T22:06:24.5959303Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5959596Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T22:06:24.5960044Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:06:24.5960449Z     return mod(**inputs)
2025-08-14T22:06:24.5960950Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
2025-08-14T22:06:24.5961557Z     generator_hidden_states = self.convbert(
2025-08-14T22:06:24.5962062Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
2025-08-14T22:06:24.5962553Z     hidden_states = self.encoder(
2025-08-14T22:06:24.5963039Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
2025-08-14T22:06:24.5963555Z     layer_outputs = layer_module(
2025-08-14T22:06:24.5968180Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:06:24.5968632Z     return super().__call__(*args, **kwargs)
2025-08-14T22:06:24.5969138Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
2025-08-14T22:06:24.5969649Z     self_attention_outputs = self.attention(
2025-08-14T22:06:24.5970148Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
2025-08-14T22:06:24.5970686Z     self_outputs = self.self(
2025-08-14T22:06:24.5971173Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward
2025-08-14T22:06:24.5971719Z     context_layer = torch.cat([context_layer, conv_out], 2)
2025-08-14T22:06:24.5971929Z 
2025-08-14T22:06:24.5972033Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5972284Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5972620Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T22:06:24.5973062Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:06:24.5973464Z     return mod(**inputs)
2025-08-14T22:06:24.5973937Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
2025-08-14T22:06:24.5974446Z     generator_hidden_states = self.convbert(
2025-08-14T22:06:24.5974942Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
2025-08-14T22:06:24.5975436Z     hidden_states = self.encoder(
2025-08-14T22:06:24.5975925Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
2025-08-14T22:06:24.5976420Z     layer_outputs = layer_module(
2025-08-14T22:06:24.5976839Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:06:24.5977286Z     return super().__call__(*args, **kwargs)
2025-08-14T22:06:24.5977782Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 586, in forward
2025-08-14T22:06:24.5978344Z     layer_output = apply_chunking_to_forward(
2025-08-14T22:06:24.5978920Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T22:06:24.5979418Z     return forward_fn(*input_tensors)
2025-08-14T22:06:24.5979984Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 593, in feed_forward_chunk
2025-08-14T22:06:24.5980579Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T22:06:24.5981137Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 514, in forward
2025-08-14T22:06:24.5981687Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T22:06:24.5982170Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T22:06:24.5982586Z     return self.act(input)
2025-08-14T22:06:24.5982733Z 
2025-08-14T22:06:24.5982833Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5983112Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5983358Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5983612Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5983857Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5984094Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5984346Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5984594Z cudagraph partition due to non gpu ops
2025-08-14T22:06:24.5984880Z cudagraph partition due to non gpu ops.
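The last stack above goes through the chunked feed-forward path (apply_chunking_to_forward calling feed_forward_chunk and then the intermediate activation). A simplified, illustrative sketch of that pattern follows; the helper below is a stand-in written for this note, not the transformers implementation, and the layer sizes are arbitrary:

    import torch

    def apply_chunking_sketch(forward_fn, chunk_size, chunk_dim, *input_tensors):
        # chunk_size == 0 means "no chunking": run the feed-forward on the whole
        # tensor, which is the branch the traceback shows (forward_fn(*input_tensors)).
        if chunk_size == 0:
            return forward_fn(*input_tensors)
        # Otherwise split along chunk_dim, run each slice, and stitch the results back.
        chunks = zip(*(t.split(chunk_size, dim=chunk_dim) for t in input_tensors))
        outputs = [forward_fn(*c) for c in chunks]
        return torch.cat(outputs, dim=chunk_dim)

    # e.g. a GELU feed-forward applied over the sequence dimension
    ff = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.GELU(), torch.nn.Linear(64, 16))
    x = torch.randn(2, 8, 16)
    full = apply_chunking_sketch(ff, 0, 1, x)      # one call on the whole tensor
    chunked = apply_chunking_sketch(ff, 4, 1, x)   # sequence split into chunks of 4
    assert torch.allclose(full, chunked, atol=1e-6)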
Found from :
2025-08-14T22:06:24.5985332Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:06:24.5985742Z     return mod(**inputs)
2025-08-14T22:06:24.5986218Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
2025-08-14T22:06:24.5986723Z     generator_hidden_states = self.convbert(
2025-08-14T22:06:24.5987238Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
2025-08-14T22:06:24.5987760Z     hidden_states = self.encoder(
2025-08-14T22:06:24.5988257Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
2025-08-14T22:06:24.5988747Z     layer_outputs = layer_module(
2025-08-14T22:06:24.5989172Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:06:24.5989622Z     return super().__call__(*args, **kwargs)
2025-08-14T22:06:24.5990156Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
2025-08-14T22:06:24.5990672Z     self_attention_outputs = self.attention(
2025-08-14T22:06:24.5991175Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
2025-08-14T22:06:24.5991680Z     self_outputs = self.self(
2025-08-14T22:06:24.5992151Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward
2025-08-14T22:06:24.5992809Z     mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2))
2025-08-14T22:06:24.5997668Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward
2025-08-14T22:06:24.5998186Z     x = self.depthwise(hidden_states)
2025-08-14T22:06:24.5998349Z 
2025-08-14T22:06:24.5998483Z cudagraph partition due to non gpu ops.
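The innermost frames of these stacks (x = self.depthwise(hidden_states); x = self.pointwise(x)) are the separable 1-D convolution that ConvBERT's self-attention applies to the transposed hidden states (the hidden_states.transpose(1, 2) in the calling frame). A simplified sketch of that shape; the class name, channel counts, and kernel size here are illustrative assumptions, not the model's actual configuration:

    import torch
    from torch import nn

    class SeparableConv1dSketch(nn.Module):
        def __init__(self, in_channels: int, out_channels: int, kernel_size: int):
            super().__init__()
            # depthwise: one filter per input channel (groups=in_channels)
            self.depthwise = nn.Conv1d(in_channels, in_channels, kernel_size,
                                       groups=in_channels, padding=kernel_size // 2, bias=False)
            # pointwise: kernel-size-1 convolution that mixes channels
            self.pointwise = nn.Conv1d(in_channels, out_channels, kernel_size=1, bias=False)

        def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
            # hidden_states: (batch, channels, sequence), i.e. after transpose(1, 2)
            x = self.depthwise(hidden_states)
            x = self.pointwise(x)
            return x

    layer = SeparableConv1dSketch(in_channels=768, out_channels=768, kernel_size=9)
    out = layer(torch.randn(2, 768, 128))   # -> (2, 768, 128)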
Found from : 2025-08-14T22:06:24.6173045Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.6173129Z return mod(**inputs) 2025-08-14T22:06:24.6173478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.6173609Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.6173954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.6174045Z hidden_states = self.encoder( 2025-08-14T22:06:24.6174382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.6174500Z layer_outputs = layer_module( 2025-08-14T22:06:24.6174782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.6174881Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.6175228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.6175333Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.6175677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.6175766Z self_outputs = self.self( 2025-08-14T22:06:24.6176102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward 2025-08-14T22:06:24.6176258Z conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer) 2025-08-14T22:06:24.6176271Z 2025-08-14T22:06:24.6176369Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.6176473Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.6176602Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.6176854Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.6176945Z return mod(**inputs) 2025-08-14T22:06:24.6177286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.6177391Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.6177759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.6177850Z hidden_states = self.encoder( 2025-08-14T22:06:24.6178199Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.6178289Z layer_outputs = layer_module( 2025-08-14T22:06:24.6178572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.6178677Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.6179034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.6179137Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.6179483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.6179570Z self_outputs = self.self( 2025-08-14T22:06:24.6179914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward 2025-08-14T22:06:24.6180073Z conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer) 2025-08-14T22:06:24.6180085Z 2025-08-14T22:06:24.6180185Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.6180326Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:06:24.6180576Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.6180665Z return mod(**inputs) 2025-08-14T22:06:24.6181014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.6190546Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.6191033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.6191128Z hidden_states = self.encoder( 2025-08-14T22:06:24.6191481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.6191581Z layer_outputs = layer_module( 2025-08-14T22:06:24.6191864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.6192000Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.6192342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:06:24.6192450Z self_attention_outputs = self.attention( 2025-08-14T22:06:24.6192795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:06:24.6192883Z self_outputs = self.self( 2025-08-14T22:06:24.6193219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward 2025-08-14T22:06:24.6193370Z context_layer = torch.cat([context_layer, conv_out], 2) 2025-08-14T22:06:24.6193385Z 2025-08-14T22:06:24.6193487Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.6193593Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.6193725Z cudagraph partition due to non gpu ops. 
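Note on the repeated diagnostics above: this shard runs on a CPU-only m4.10xlarge runner, so Inductor's cudagraph pass has nothing to capture and reports "cudagraph partition due to non gpu ops" for each partitioned region; these are informational, and the run still reports "pass" below. The snippet that follows is a minimal, illustrative sketch (not taken from the benchmark harness) of the setting in which such messages appear; surfacing the skip reasons locally may additionally require Inductor logging (for example via the TORCH_LOGS environment variable).

```python
# Illustrative sketch only: a tiny module compiled with the CUDA-graph-friendly
# "reduce-overhead" mode. On a CPU-only host (as in this job) the traced regions
# contain no GPU ops, so cudagraph capture is partitioned/skipped rather than applied.
import torch

model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.GELU()).eval()
compiled = torch.compile(model, mode="reduce-overhead")

with torch.no_grad():
    out = compiled(torch.randn(8, 64))  # CPU tensors -> non-GPU ops
print(out.shape)
```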
Found from : 2025-08-14T22:06:24.6193976Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:06:24.6194067Z return mod(**inputs) 2025-08-14T22:06:24.6194406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:06:24.6194520Z generator_hidden_states = self.convbert( 2025-08-14T22:06:24.6194878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:06:24.6194968Z hidden_states = self.encoder( 2025-08-14T22:06:24.6195315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:06:24.6195403Z layer_outputs = layer_module( 2025-08-14T22:06:24.6195743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:06:24.6195853Z return super().__call__(*args, **kwargs) 2025-08-14T22:06:24.6198273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 586, in forward 2025-08-14T22:06:24.6198390Z layer_output = apply_chunking_to_forward( 2025-08-14T22:06:24.6198748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:06:24.6198852Z return forward_fn(*input_tensors) 2025-08-14T22:06:24.6199244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 593, in feed_forward_chunk 2025-08-14T22:06:24.6199396Z intermediate_output = self.intermediate(attention_output) 2025-08-14T22:06:24.6199743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 514, in forward 2025-08-14T22:06:24.6199888Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T22:06:24.6200160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:06:24.6200259Z return self.act(input) 2025-08-14T22:06:24.6200272Z 2025-08-14T22:06:24.6200370Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.6200492Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.6200602Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.6200697Z cudagraph partition due to non gpu ops 2025-08-14T22:06:24.6200797Z cudagraph partition due to non gpu ops 2025-08-14T22:06:33.4463943Z Compilation time (from dynamo_timed): 31.016774071 2025-08-14T22:06:33.4539976Z pass 2025-08-14T22:06:33.4541782Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:06:33.4542908Z TIMING: _recursive_pre_grad_passes:0.08321 _recursive_joint_graph_passes:0.86603 _recursive_post_grad_passes:0.22873 async_compile.wait:0.94188 code_gen:8.3849 inductor_compile:13.06035 backend_compile:25.18783 gc:0.00102 entire_frame_compile:31.01677 total_wall_time:31.01677 2025-08-14T22:06:33.4544303Z STATS: call_* op count: 634 | FakeTensorMode.__torch_dispatch__:45792 | FakeTensor.__torch_dispatch__:6043 | ProxyTorchDispatchMode.__torch_dispatch__:9702 2025-08-14T22:06:33.4545017Z Dynamo produced 1 graphs covering 634 ops with 0 graph breaks (0 unique) 2025-08-14T22:06:35.7358373Z accuracy pass_rate=95.35% 2025-08-14T22:06:35.7364451Z calls_captured gmean=0.00x mean=609.233x 
2025-08-14T22:06:35.7373959Z unique_graphs gmean=0.00x mean=1.093x 2025-08-14T22:06:35.7375848Z graph_breaks gmean=0.00x mean=0.140x 2025-08-14T22:06:35.7378298Z unique_graph_breaks gmean=0.00x mean=0.047x 2025-08-14T22:06:35.7381205Z autograd_captures gmean=0.00x mean=0.000x 2025-08-14T22:06:35.7388539Z autograd_compiles gmean=0.00x mean=0.000x 2025-08-14T22:06:35.7388857Z cudagraph_skips gmean=0.00x mean=1.093x 2025-08-14T22:06:35.7389164Z compilation_latency mean=27.633 seconds 2025-08-14T22:06:36.7393746Z + python benchmarks/dynamo/check_accuracy.py --actual /var/lib/jenkins/workspace/test/test-reports/inference_huggingface.csv --expected benchmarks/dynamo/ci_expected_accuracy/cpu_inductor_freezing_huggingface_inference.csv 2025-08-14T22:06:37.1419493Z AlbertForMaskedLM PASS 2025-08-14T22:06:37.1427100Z AlbertForQuestionAnswering PASS 2025-08-14T22:06:37.1430817Z AllenaiLongformerBase PASS 2025-08-14T22:06:37.1436972Z BartForCausalLM PASS 2025-08-14T22:06:37.1439137Z BartForConditionalGeneration PASS 2025-08-14T22:06:37.1443167Z BertForMaskedLM PASS 2025-08-14T22:06:37.1451708Z BertForQuestionAnswering PASS 2025-08-14T22:06:37.1456159Z BlenderbotForCausalLM XFAIL 2025-08-14T22:06:37.1460075Z BlenderbotSmallForCausalLM PASS 2025-08-14T22:06:37.1470087Z BlenderbotSmallForConditionalGeneration PASS 2025-08-14T22:06:37.1470434Z CamemBert PASS 2025-08-14T22:06:37.1472029Z DebertaV2ForMaskedLM XFAIL 2025-08-14T22:06:37.1475841Z DebertaV2ForQuestionAnswering PASS 2025-08-14T22:06:37.1488483Z DistilBertForMaskedLM PASS 2025-08-14T22:06:37.1494984Z DistilBertForQuestionAnswering PASS 2025-08-14T22:06:37.1498854Z DistillGPT2 PASS 2025-08-14T22:06:37.1502613Z ElectraForCausalLM PASS 2025-08-14T22:06:37.1511553Z ElectraForQuestionAnswering PASS 2025-08-14T22:06:37.1511890Z GPT2ForSequenceClassification PASS 2025-08-14T22:06:37.1514901Z GoogleFnet PASS 2025-08-14T22:06:37.1518392Z LayoutLMForMaskedLM PASS 2025-08-14T22:06:37.1526846Z LayoutLMForSequenceClassification PASS 2025-08-14T22:06:37.1530640Z M2M100ForConditionalGeneration PASS 2025-08-14T22:06:37.1538546Z MBartForCausalLM PASS 2025-08-14T22:06:37.1538865Z MBartForConditionalGeneration PASS 2025-08-14T22:06:37.1542500Z MT5ForConditionalGeneration PASS 2025-08-14T22:06:37.1546283Z MegatronBertForCausalLM PASS 2025-08-14T22:06:37.1557375Z MegatronBertForQuestionAnswering PASS 2025-08-14T22:06:37.1559189Z MobileBertForMaskedLM PASS 2025-08-14T22:06:37.1567479Z MobileBertForQuestionAnswering PASS 2025-08-14T22:06:37.1569436Z OPTForCausalLM PASS 2025-08-14T22:06:37.1571419Z PLBartForCausalLM PASS 2025-08-14T22:06:37.1575148Z PLBartForConditionalGeneration PASS 2025-08-14T22:06:37.1583460Z PegasusForCausalLM PASS 2025-08-14T22:06:37.1587278Z PegasusForConditionalGeneration PASS 2025-08-14T22:06:37.1591063Z RobertaForCausalLM PASS 2025-08-14T22:06:37.1600671Z RobertaForQuestionAnswering PASS 2025-08-14T22:06:37.1600991Z T5ForConditionalGeneration PASS 2025-08-14T22:06:37.1603197Z T5Small PASS 2025-08-14T22:06:37.1610389Z TrOCRForCausalLM PASS 2025-08-14T22:06:37.1619667Z XGLMForCausalLM PASS 2025-08-14T22:06:37.1629648Z XLNetLMHeadModel PASS 2025-08-14T22:06:37.1629933Z YituTechConvBert PASS 2025-08-14T22:06:37.2184839Z + python benchmarks/dynamo/check_graph_breaks.py --actual /var/lib/jenkins/workspace/test/test-reports/inference_huggingface.csv --expected benchmarks/dynamo/ci_expected_accuracy/cpu_inductor_freezing_huggingface_inference.csv 2025-08-14T22:06:37.6314188Z AlbertForMaskedLM PASS 2025-08-14T22:06:37.6314620Z 
AlbertForQuestionAnswering PASS 2025-08-14T22:06:37.6317414Z AllenaiLongformerBase PASS 2025-08-14T22:06:37.6325889Z BartForCausalLM PASS 2025-08-14T22:06:37.6329672Z BartForConditionalGeneration PASS 2025-08-14T22:06:37.6333659Z BertForMaskedLM PASS 2025-08-14T22:06:37.6337796Z BertForQuestionAnswering PASS 2025-08-14T22:06:37.6341606Z BlenderbotForCausalLM PASS 2025-08-14T22:06:37.6345395Z BlenderbotSmallForCausalLM PASS 2025-08-14T22:06:37.6358423Z BlenderbotSmallForConditionalGeneration PASS 2025-08-14T22:06:37.6366802Z CamemBert PASS 2025-08-14T22:06:37.6369161Z DebertaV2ForMaskedLM PASS 2025-08-14T22:06:37.6372940Z DebertaV2ForQuestionAnswering PASS 2025-08-14T22:06:37.6385042Z DistilBertForMaskedLM PASS 2025-08-14T22:06:37.6385632Z DistilBertForQuestionAnswering PASS 2025-08-14T22:06:37.6385934Z DistillGPT2 PASS 2025-08-14T22:06:37.6388664Z ElectraForCausalLM PASS 2025-08-14T22:06:37.6396769Z ElectraForQuestionAnswering PASS 2025-08-14T22:06:37.6400641Z GPT2ForSequenceClassification PASS 2025-08-14T22:06:37.6404672Z GoogleFnet PASS 2025-08-14T22:06:37.6410450Z LayoutLMForMaskedLM PASS 2025-08-14T22:06:37.6412578Z LayoutLMForSequenceClassification PASS 2025-08-14T22:06:37.6416364Z M2M100ForConditionalGeneration PASS 2025-08-14T22:06:37.6426517Z MBartForCausalLM PASS 2025-08-14T22:06:37.6428961Z MBartForConditionalGeneration PASS 2025-08-14T22:06:37.6432488Z MT5ForConditionalGeneration PASS 2025-08-14T22:06:37.6439564Z MegatronBertForCausalLM PASS 2025-08-14T22:06:37.6440150Z MegatronBertForQuestionAnswering PASS 2025-08-14T22:06:37.6444176Z MobileBertForMaskedLM PASS 2025-08-14T22:06:37.6447986Z MobileBertForQuestionAnswering PASS 2025-08-14T22:06:37.6456708Z OPTForCausalLM PASS 2025-08-14T22:06:37.6460714Z PLBartForCausalLM PASS 2025-08-14T22:06:37.6472500Z PLBartForConditionalGeneration PASS 2025-08-14T22:06:37.6472808Z PegasusForCausalLM PASS 2025-08-14T22:06:37.6473116Z PegasusForConditionalGeneration PASS 2025-08-14T22:06:37.6476149Z RobertaForCausalLM PASS 2025-08-14T22:06:37.6488728Z RobertaForQuestionAnswering PASS 2025-08-14T22:06:37.6492643Z T5ForConditionalGeneration PASS 2025-08-14T22:06:37.6501437Z T5Small PASS 2025-08-14T22:06:37.6501727Z TrOCRForCausalLM PASS 2025-08-14T22:06:37.6504432Z XGLMForCausalLM PASS_BUT_FLAKY 2025-08-14T22:06:37.6512004Z XLNetLMHeadModel PASS 2025-08-14T22:06:37.6512404Z YituTechConvBert PASS 2025-08-14T22:06:37.7068580Z + sccache_epilogue 2025-08-14T22:06:37.7068887Z + echo '::group::Sccache Compilation Log' 2025-08-14T22:06:37.7069492Z ##[group]Sccache Compilation Log 2025-08-14T22:06:37.7069838Z + echo '=================== sccache compilation log ===================' 2025-08-14T22:06:37.7070175Z =================== sccache compilation log =================== 2025-08-14T22:06:37.7070642Z + python /var/lib/jenkins/workspace/.ci/pytorch/print_sccache_log.py /var/lib/jenkins/sccache_error.log 2025-08-14T22:06:37.7375433Z + echo '=========== If your build fails, please take a look at the log above for possible reasons ===========' 2025-08-14T22:06:37.7376107Z =========== If your build fails, please take a look at the log above for possible reasons =========== 2025-08-14T22:06:37.7376517Z + sccache --show-stats 2025-08-14T22:06:37.7415575Z Compile requests 376 2025-08-14T22:06:37.7415886Z Compile requests executed 0 2025-08-14T22:06:37.7416176Z Cache hits 0 2025-08-14T22:06:37.7416463Z Cache misses 0 2025-08-14T22:06:37.7416710Z Cache hits rate - 2025-08-14T22:06:37.7416948Z Cache timeouts 0 2025-08-14T22:06:37.7417177Z Cache read errors 0 
2025-08-14T22:06:37.7417406Z Forced recaches 0 2025-08-14T22:06:37.7417638Z Cache write errors 0 2025-08-14T22:06:37.7417862Z Cache errors 0 2025-08-14T22:06:37.7418098Z Compilations 0 2025-08-14T22:06:37.7418336Z Compilation failures 0 2025-08-14T22:06:37.7418571Z Non-cacheable compilations 0 2025-08-14T22:06:37.7418817Z Non-cacheable calls 41 2025-08-14T22:06:37.7419056Z Non-compilation calls 335 2025-08-14T22:06:37.7419301Z Unsupported compiler calls 0 2025-08-14T22:06:37.7419544Z Average cache write 0.000 s 2025-08-14T22:06:37.7419801Z Average compiler 0.000 s 2025-08-14T22:06:37.7420051Z Average cache read hit 0.000 s 2025-08-14T22:06:37.7420435Z Failed distributed compilations 0 2025-08-14T22:06:37.7420697Z 2025-08-14T22:06:37.7420784Z Non-cacheable reasons: 2025-08-14T22:06:37.7421038Z -E 41 2025-08-14T22:06:37.7421195Z 2025-08-14T22:06:37.7421384Z Cache location s3, name: ossci-compiler-cache-circleci-v2, prefix: / 2025-08-14T22:06:37.7421735Z Version (client) 0.10.0 2025-08-14T22:06:37.7421975Z + sccache --stop-server 2025-08-14T22:06:37.7431603Z Stopping sccache server... 2025-08-14T22:06:37.7433715Z Compile requests 376 2025-08-14T22:06:37.7434022Z Compile requests executed 0 2025-08-14T22:06:37.7434312Z Cache hits 0 2025-08-14T22:06:37.7434665Z Cache misses 0 2025-08-14T22:06:37.7434943Z Cache hits rate - 2025-08-14T22:06:37.7443635Z Cache timeouts 0 2025-08-14T22:06:37.7443914Z Cache read errors 0 2025-08-14T22:06:37.7444190Z Forced recaches 0 2025-08-14T22:06:37.7444469Z Cache write errors 0 2025-08-14T22:06:37.7444743Z Cache errors 0 2025-08-14T22:06:37.7445007Z Compilations 0 2025-08-14T22:06:37.7445284Z Compilation failures 0 2025-08-14T22:06:37.7445566Z Non-cacheable compilations 0 2025-08-14T22:06:37.7445851Z Non-cacheable calls 41 2025-08-14T22:06:37.7446128Z Non-compilation calls 335 2025-08-14T22:06:37.7446411Z Unsupported compiler calls 0 2025-08-14T22:06:37.7446699Z Average cache write 0.000 s 2025-08-14T22:06:37.7446989Z Average compiler 0.000 s 2025-08-14T22:06:37.7447277Z Average cache read hit 0.000 s 2025-08-14T22:06:37.7447590Z Failed distributed compilations 0 2025-08-14T22:06:37.7447751Z 2025-08-14T22:06:37.7447837Z Non-cacheable reasons: 2025-08-14T22:06:37.7448043Z -E 41 2025-08-14T22:06:37.7448199Z 2025-08-14T22:06:37.7448382Z Cache location s3, name: ossci-compiler-cache-circleci-v2, prefix: / 2025-08-14T22:06:37.7449080Z Version (client) 0.10.0 2025-08-14T22:06:37.7449368Z + echo ::endgroup:: 2025-08-14T22:06:37.7451868Z ##[endgroup] 2025-08-14T22:06:37.7452056Z + cleanup_workspace 2025-08-14T22:06:37.7452485Z + echo 'sudo may print the following warning message that can be ignored. The chown command will still run.' 2025-08-14T22:06:37.7453044Z sudo may print the following warning message that can be ignored. The chown command will still run. 
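For context on the two verification steps above: check_accuracy.py and check_graph_breaks.py compare the freshly generated test/test-reports/inference_huggingface.csv against the expected CSV committed under benchmarks/dynamo/ci_expected_accuracy/, printing one PASS/XFAIL line per model. The sketch below is a simplified, hypothetical stand-in for that comparison, not the real script; the "name" and "accuracy" column names are assumptions made for illustration.

```python
# Hypothetical, simplified comparison of an actual vs. expected benchmark CSV.
# Column names ("name", "accuracy") are assumptions, not the real schema.
import pandas as pd

def compare(actual_csv: str, expected_csv: str) -> bool:
    actual = pd.read_csv(actual_csv).set_index("name")
    expected = pd.read_csv(expected_csv).set_index("name")
    all_ok = True
    for model, row in expected.iterrows():
        got = actual.loc[model, "accuracy"] if model in actual.index else "missing"
        ok = got == row["accuracy"]
        print(f"{model} {'PASS' if ok else 'FAIL'}")
        all_ok &= ok
    return all_ok
```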
2025-08-14T22:06:37.7453505Z + echo ' sudo: setrlimit(RLIMIT_STACK): Operation not permitted' 2025-08-14T22:06:37.7453862Z sudo: setrlimit(RLIMIT_STACK): Operation not permitted 2025-08-14T22:06:37.7454314Z + echo 'For more details refer to https://github.com/sudo-project/sudo/issues/42' 2025-08-14T22:06:37.7454750Z For more details refer to https://github.com/sudo-project/sudo/issues/42 2025-08-14T22:06:37.7455108Z + sudo chown -R 1000 /var/lib/jenkins/workspace 2025-08-14T22:06:38.4477848Z ##[group]Run pytorch/test-infra/.github/actions/upload-benchmark-results@main 2025-08-14T22:06:38.4478219Z with: 2025-08-14T22:06:38.4478431Z benchmark-results-dir: test/test-reports 2025-08-14T22:06:38.4478682Z dry-run: false 2025-08-14T22:06:38.4478865Z schema-version: v3 2025-08-14T22:06:38.4479285Z github-token: *** 2025-08-14T22:06:38.4479470Z env: 2025-08-14T22:06:38.4479641Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:38.4479986Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:38.4480372Z ##[endgroup] 2025-08-14T22:06:38.4499563Z ##[group]Run set -eux 2025-08-14T22:06:38.4499805Z set -eux 2025-08-14T22:06:38.4500103Z python3 -mpip install boto3==1.35.33 psutil==7.0.0 pynvml==12.0.0 2025-08-14T22:06:38.4500426Z  2025-08-14T22:06:38.4500599Z DEVICE_NAME="" 2025-08-14T22:06:38.4500799Z DEVICE_TYPE="" 2025-08-14T22:06:38.4500991Z  2025-08-14T22:06:38.4501212Z if command -v nvidia-smi; then 2025-08-14T22:06:38.4501560Z  # NB: I'm using PyTorch here to get the device name, however, it needs to 2025-08-14T22:06:38.4501983Z  # install the correct version of PyTorch manually for now. Any PyTorch 2025-08-14T22:06:38.4502393Z  # version is fine, I just use 2.7.1 to satify PYPIDEP linter 2025-08-14T22:06:38.4502718Z  python3 -mpip install torch==2.7.1 2025-08-14T22:06:38.4502986Z elif command -v rocminfo; then 2025-08-14T22:06:38.4503314Z  # NB: Installing torch on ROCm runner with pip here causes CI to fail 2025-08-14T22:06:38.4503729Z  # with a memoryview is too large error only on MI300 runners. Is pip 2025-08-14T22:06:38.4504149Z  # version on ROCm runner there too old? 
As a workaround, let's use the 2025-08-14T22:06:38.4504513Z  # GPU device name coming from rocminfo instead 2025-08-14T22:06:38.4504792Z  DEVICE_NAME=rocm 2025-08-14T22:06:38.4505169Z  DEVICE_TYPE=$(rocminfo | grep "Marketing Name" | tail -n1 | awk -F':' '{print $2}' | xargs) 2025-08-14T22:06:38.4505547Z fi 2025-08-14T22:06:38.4505714Z  2025-08-14T22:06:38.4505929Z echo "DEVICE_NAME=$DEVICE_NAME" >> $GITHUB_ENV 2025-08-14T22:06:38.4506253Z echo "DEVICE_TYPE=$DEVICE_TYPE" >> $GITHUB_ENV 2025-08-14T22:06:38.4518814Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:38.4519123Z env: 2025-08-14T22:06:38.4519311Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:38.4519661Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:38.4520033Z ##[endgroup] 2025-08-14T22:06:38.4549526Z + python3 -mpip install boto3==1.35.33 psutil==7.0.0 pynvml==12.0.0 2025-08-14T22:06:38.7097194Z Defaulting to user installation because normal site-packages is not writeable 2025-08-14T22:06:39.7876360Z Collecting boto3==1.35.33 2025-08-14T22:06:39.8051939Z Downloading boto3-1.35.33-py3-none-any.whl (139 kB) 2025-08-14T22:06:40.1335725Z Collecting psutil==7.0.0 2025-08-14T22:06:40.1374379Z Downloading psutil-7.0.0-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (277 kB) 2025-08-14T22:06:40.1736734Z Collecting pynvml==12.0.0 2025-08-14T22:06:40.1778287Z Downloading pynvml-12.0.0-py3-none-any.whl (26 kB) 2025-08-14T22:06:41.3841201Z Collecting botocore<1.36.0,>=1.35.33 2025-08-14T22:06:41.3876391Z Downloading botocore-1.35.99-py3-none-any.whl (13.3 MB) 2025-08-14T22:06:41.5571652Z Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in /usr/lib/python3.9/site-packages (from boto3==1.35.33) (0.10.0) 2025-08-14T22:06:41.5969232Z Collecting s3transfer<0.11.0,>=0.10.0 2025-08-14T22:06:41.6004368Z Downloading s3transfer-0.10.4-py3-none-any.whl (83 kB) 2025-08-14T22:06:41.6488738Z Collecting nvidia-ml-py<13.0.0a0,>=12.0.0 2025-08-14T22:06:41.6525192Z Downloading nvidia_ml_py-12.575.51-py3-none-any.whl (47 kB) 2025-08-14T22:06:41.6623195Z Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/lib/python3.9/site-packages (from botocore<1.36.0,>=1.35.33->boto3==1.35.33) (2.8.1) 2025-08-14T22:06:41.6626222Z Requirement already satisfied: urllib3<1.27,>=1.25.4 in /usr/lib/python3.9/site-packages (from botocore<1.36.0,>=1.35.33->boto3==1.35.33) (1.25.10) 2025-08-14T22:06:41.7999384Z Requirement already satisfied: six>=1.5 in /usr/lib/python3.9/site-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.36.0,>=1.35.33->boto3==1.35.33) (1.15.0) 2025-08-14T22:06:41.9446050Z Installing collected packages: botocore, s3transfer, nvidia-ml-py, pynvml, psutil, boto3 2025-08-14T22:06:42.4936132Z Attempting uninstall: nvidia-ml-py 2025-08-14T22:06:42.4945233Z Found existing installation: nvidia-ml-py 11.525.84 2025-08-14T22:06:42.4958156Z Uninstalling nvidia-ml-py-11.525.84: 2025-08-14T22:06:42.5168466Z Successfully uninstalled nvidia-ml-py-11.525.84 2025-08-14T22:06:42.5857041Z Attempting uninstall: psutil 2025-08-14T22:06:42.5857376Z Found existing installation: psutil 5.9.8 2025-08-14T22:06:42.5932059Z Uninstalling psutil-5.9.8: 2025-08-14T22:06:42.5932341Z Successfully uninstalled psutil-5.9.8 2025-08-14T22:06:42.7827885Z Successfully installed boto3-1.35.33 botocore-1.35.99 nvidia-ml-py-12.575.51 psutil-7.0.0 pynvml-12.0.0 s3transfer-0.10.4 2025-08-14T22:06:42.9176517Z + DEVICE_NAME= 
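The device-detection step above finds neither nvidia-smi nor rocminfo on this CPU runner, so DEVICE_NAME and DEVICE_TYPE are written out empty, as the trace lines show. The sketch below re-expresses that shell logic in Python purely for illustration; labelling the NVIDIA branch "cuda" is an assumption here (the real step installs torch and resolves the device name elsewhere).

```python
# Illustrative Python equivalent of the shell detection step shown above.
import os
import shutil
import subprocess

device_name, device_type = "", ""
if shutil.which("nvidia-smi"):
    device_name = "cuda"  # assumption for illustration; the real step defers to torch
elif shutil.which("rocminfo"):
    device_name = "rocm"
    info = subprocess.run(["rocminfo"], capture_output=True, text=True).stdout
    marketing = [ln.split(":", 1)[1].strip()
                 for ln in info.splitlines() if "Marketing Name" in ln]
    device_type = marketing[-1] if marketing else ""

# GITHUB_ENV is how GitHub Actions steps export variables to later steps.
with open(os.environ["GITHUB_ENV"], "a") as fh:
    fh.write(f"DEVICE_NAME={device_name}\nDEVICE_TYPE={device_type}\n")
```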
2025-08-14T22:06:42.9176761Z + DEVICE_TYPE= 2025-08-14T22:06:42.9176975Z + command -v nvidia-smi 2025-08-14T22:06:42.9177222Z + command -v rocminfo 2025-08-14T22:06:42.9177444Z + echo DEVICE_NAME= 2025-08-14T22:06:42.9177722Z + echo DEVICE_TYPE= 2025-08-14T22:06:42.9196430Z ##[group]Run set -eux 2025-08-14T22:06:42.9196659Z set -eux 2025-08-14T22:06:42.9196846Z  2025-08-14T22:06:42.9197046Z if [[ -z "${GITHUB_TOKEN}" ]]; then 2025-08-14T22:06:42.9197332Z  echo "Missing github-token input" 2025-08-14T22:06:42.9197572Z  exit 1 2025-08-14T22:06:42.9197768Z fi 2025-08-14T22:06:42.9211980Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:42.9212274Z env: 2025-08-14T22:06:42.9212461Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:42.9212802Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:42.9213183Z DEVICE_NAME: 2025-08-14T22:06:42.9213366Z DEVICE_TYPE: 2025-08-14T22:06:42.9213755Z GITHUB_TOKEN: *** 2025-08-14T22:06:42.9213949Z ##[endgroup] 2025-08-14T22:06:42.9240024Z + [[ -z *** ]] 2025-08-14T22:06:42.9282811Z ##[group]Run pytorch/test-infra/.github/actions/get-workflow-job-id@main 2025-08-14T22:06:42.9283135Z with: 2025-08-14T22:06:42.9283449Z github-token: *** 2025-08-14T22:06:42.9283724Z env: 2025-08-14T22:06:42.9283895Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:42.9284240Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:42.9284613Z DEVICE_NAME: 2025-08-14T22:06:42.9284789Z DEVICE_TYPE: 2025-08-14T22:06:42.9284990Z ##[endgroup] 2025-08-14T22:06:42.9297126Z ##[group]Run set -eux 2025-08-14T22:06:42.9297346Z set -eux 2025-08-14T22:06:42.9297535Z  2025-08-14T22:06:42.9297908Z python3 "${GITHUB_ACTION_PATH}/../../scripts/get_workflow_job_id.py" "${GITHUB_RUN_ID}" "${RUNNER_NAME}" 2025-08-14T22:06:42.9311873Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:42.9312227Z env: 2025-08-14T22:06:42.9312418Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:42.9312809Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:42.9313181Z DEVICE_NAME: 2025-08-14T22:06:42.9313366Z DEVICE_TYPE: 2025-08-14T22:06:42.9313676Z GITHUB_TOKEN: *** 2025-08-14T22:06:42.9313869Z ##[endgroup] 2025-08-14T22:06:42.9354509Z + python3 /home/ec2-user/actions-runner/_work/_actions/pytorch/test-infra/main/.github/actions/get-workflow-job-id/../../scripts/get_workflow_job_id.py 16976338999 i-0019fc24284416ca3 2025-08-14T22:06:44.3201639Z setting job-id=48128301923 2025-08-14T22:06:44.3202404Z setting job-name=linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2) 2025-08-14T22:06:44.3312781Z ##[group]Run set -eux 2025-08-14T22:06:44.3313016Z set -eux 2025-08-14T22:06:44.3313199Z  2025-08-14T22:06:44.3313504Z python3 "${GITHUB_ACTION_PATH}/../../scripts/benchmarks/gather_metadata.py" \ 2025-08-14T22:06:44.3313895Z  --schema-version "${SCHEMA_VERSION}" \ 2025-08-14T22:06:44.3314166Z  --repo "${REPO}" \ 2025-08-14T22:06:44.3314406Z  --head-branch "${HEAD_BRANCH}" \ 2025-08-14T22:06:44.3314662Z  --head-sha "${HEAD_SHA}" \ 2025-08-14T22:06:44.3314924Z  --workflow-id "${WORKFLOW_RUN_ID}" \ 2025-08-14T22:06:44.3315199Z  --run-attempt "${RUN_ATTEMPT}" \ 2025-08-14T22:06:44.3315447Z  --job-id "${JOB_ID}" \ 2025-08-14T22:06:44.3315692Z  --job-name "${JOB_NAME}" 2025-08-14T22:06:44.3320982Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 
2025-08-14T22:06:44.3321400Z env: 2025-08-14T22:06:44.3321586Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:44.3321945Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:44.3322329Z DEVICE_NAME: 2025-08-14T22:06:44.3322522Z DEVICE_TYPE: 2025-08-14T22:06:44.3322701Z SCHEMA_VERSION: v3 2025-08-14T22:06:44.3322906Z REPO: pytorch/pytorch 2025-08-14T22:06:44.3323140Z HEAD_BRANCH: refs/heads/main 2025-08-14T22:06:44.3323423Z HEAD_SHA: 1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T22:06:44.3327748Z WORKFLOW_RUN_ID: 16976338999 2025-08-14T22:06:44.3327967Z RUN_ATTEMPT: 1 2025-08-14T22:06:44.3328146Z JOB_ID: 48128301923 2025-08-14T22:06:44.3328661Z JOB_NAME: linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2) 2025-08-14T22:06:44.3329195Z ##[endgroup] 2025-08-14T22:06:44.3358449Z + python3 /home/ec2-user/actions-runner/_work/_actions/pytorch/test-infra/main/.github/actions/upload-benchmark-results/../../scripts/benchmarks/gather_metadata.py --schema-version v3 --repo pytorch/pytorch --head-branch refs/heads/main --head-sha 1fc683cf17c8c673044538d10266c00f92987be2 --workflow-id 16976338999 --run-attempt 1 --job-id 48128301923 --job-name 'linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2)' 2025-08-14T22:06:44.3710728Z ##[group]Run set -eux 2025-08-14T22:06:44.3710951Z set -eux 2025-08-14T22:06:44.3711142Z  2025-08-14T22:06:44.3711446Z python3 "${GITHUB_ACTION_PATH}/../../scripts/benchmarks/gather_runners_info.py" 2025-08-14T22:06:44.3721134Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:44.3721437Z env: 2025-08-14T22:06:44.3721617Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:44.3721986Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:44.3722358Z DEVICE_NAME: 2025-08-14T22:06:44.3722538Z DEVICE_TYPE: 2025-08-14T22:06:44.3722708Z ##[endgroup] 2025-08-14T22:06:44.3746999Z + python3 /home/ec2-user/actions-runner/_work/_actions/pytorch/test-infra/main/.github/actions/upload-benchmark-results/../../scripts/benchmarks/gather_runners_info.py 2025-08-14T22:06:44.4213205Z INFO:root:Fail to import torch to get the device name 2025-08-14T22:06:44.4319204Z ##[group]Run set -eux 2025-08-14T22:06:44.4319424Z set -eux 2025-08-14T22:06:44.4319598Z  2025-08-14T22:06:44.4319807Z # TODO (huydhn): Implement this part 2025-08-14T22:06:44.4320111Z echo "dependencies={}" >> "${GITHUB_OUTPUT}" 2025-08-14T22:06:44.4325719Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:44.4326004Z env: 2025-08-14T22:06:44.4326190Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:44.4326662Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:44.4327030Z DEVICE_NAME: 2025-08-14T22:06:44.4327213Z DEVICE_TYPE: 2025-08-14T22:06:44.4327385Z ##[endgroup] 2025-08-14T22:06:44.4353168Z + echo 'dependencies={}' 2025-08-14T22:06:44.4377195Z ##[group]Run set -eux 2025-08-14T22:06:44.4377436Z set -eux 2025-08-14T22:06:44.4377620Z  2025-08-14T22:06:44.4377837Z if [[ ! 
-d "${BENCHMARK_RESULTS_DIR}" ]]; then 2025-08-14T22:06:44.4378172Z  echo "${BENCHMARK_RESULTS_DIR} does not exist, skipping" 2025-08-14T22:06:44.4378532Z  # We don't want the job to fail if the directory doesn't exist 2025-08-14T22:06:44.4378823Z  exit 0 2025-08-14T22:06:44.4379002Z fi 2025-08-14T22:06:44.4379162Z  2025-08-14T22:06:44.4379349Z if [[ "${DRY_RUN}" == "true" ]]; then 2025-08-14T22:06:44.4379710Z  python3 "${GITHUB_ACTION_PATH}/../../scripts/upload_benchmark_results.py" \ 2025-08-14T22:06:44.4380132Z  --benchmark-results-dir "${BENCHMARK_RESULTS_DIR}" \ 2025-08-14T22:06:44.4380448Z  --metadata "${BENCHMARK_METADATA}" \ 2025-08-14T22:06:44.4380711Z  --runners "${RUNNER_INFO}" \ 2025-08-14T22:06:44.4380979Z  --dependencies "${DEPENDENCIES}" \ 2025-08-14T22:06:44.4381224Z  --dry-run 2025-08-14T22:06:44.4381417Z else 2025-08-14T22:06:44.4381788Z  python3 "${GITHUB_ACTION_PATH}/../../scripts/upload_benchmark_results.py" \ 2025-08-14T22:06:44.4386398Z  --benchmark-results-dir "${BENCHMARK_RESULTS_DIR}" \ 2025-08-14T22:06:44.4386716Z  --metadata "${BENCHMARK_METADATA}" \ 2025-08-14T22:06:44.4386982Z  --runners "${RUNNER_INFO}" \ 2025-08-14T22:06:44.4387248Z  --dependencies "${DEPENDENCIES}" 2025-08-14T22:06:44.4387484Z fi 2025-08-14T22:06:44.4392178Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:44.4392466Z env: 2025-08-14T22:06:44.4392647Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:44.4392993Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:44.4393354Z DEVICE_NAME: 2025-08-14T22:06:44.4393528Z DEVICE_TYPE: 2025-08-14T22:06:44.4393733Z BENCHMARK_RESULTS_DIR: test/test-reports 2025-08-14T22:06:44.4393980Z DRY_RUN: false 2025-08-14T22:06:44.4395103Z BENCHMARK_METADATA: {"timestamp": 1755209204, "schema_version": "v3", "name": "linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2)", "repo": "pytorch/pytorch", "head_branch": "refs/heads/main", "head_sha": "1fc683cf17c8c673044538d10266c00f92987be2", "workflow_id": 16976338999, "run_attempt": 1, "job_id": 48128301923} 2025-08-14T22:06:44.4396728Z RUNNER_INFO: [{"cpu_info": "x86_64", "cpu_count": 40, "avail_mem_in_gb": 157, "extra_info": {"hostname": "ip-10-0-56-34.ec2.internal"}, "name": "", "type": ""}] 2025-08-14T22:06:44.4397189Z DEPENDENCIES: {} 2025-08-14T22:06:44.4397381Z ##[endgroup] 2025-08-14T22:06:44.4417160Z + [[ ! 
-d test/test-reports ]] 2025-08-14T22:06:44.4417425Z + [[ false == \t\r\u\e ]] 2025-08-14T22:06:44.4420202Z + python3 /home/ec2-user/actions-runner/_work/_actions/pytorch/test-infra/main/.github/actions/upload-benchmark-results/../../scripts/upload_benchmark_results.py --benchmark-results-dir test/test-reports --metadata '{"timestamp": 1755209204, "schema_version": "v3", "name": "linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2)", "repo": "pytorch/pytorch", "head_branch": "refs/heads/main", "head_sha": "1fc683cf17c8c673044538d10266c00f92987be2", "workflow_id": 16976338999, "run_attempt": 1, "job_id": 48128301923}' --runners '[{"cpu_info": "x86_64", "cpu_count": 40, "avail_mem_in_gb": 157, "extra_info": {"hostname": "ip-10-0-56-34.ec2.internal"}, "name": "", "type": ""}]' --dependencies '{}' 2025-08-14T22:06:44.6044576Z INFO:root:Upload test/test-reports/inference_huggingface.json to s3://ossci-benchmarks/v3/pytorch/pytorch/16976338999/48128301923/inference_huggingface.json 2025-08-14T22:06:44.6463919Z INFO:botocore.credentials:Found credentials from IAM Role: gh-ci-github-action-runners-runner-role 2025-08-14T22:06:44.9128341Z ##[group]Run cat test/**/*_toprint.log || true 2025-08-14T22:06:44.9128688Z cat test/**/*_toprint.log || true 2025-08-14T22:06:44.9133961Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:44.9134259Z env: 2025-08-14T22:06:44.9134435Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:44.9134778Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:44.9135147Z DEVICE_NAME: 2025-08-14T22:06:44.9135336Z DEVICE_TYPE: 2025-08-14T22:06:44.9135514Z ##[endgroup] 2025-08-14T22:06:44.9237468Z cat: 'test/**/*_toprint.log': No such file or directory 2025-08-14T22:06:44.9277812Z ##[group]Run kill "$MONITOR_SCRIPT_PID" 2025-08-14T22:06:44.9278104Z kill "$MONITOR_SCRIPT_PID" 2025-08-14T22:06:44.9283215Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:44.9283504Z env: 2025-08-14T22:06:44.9283686Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:44.9284026Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:44.9284391Z DEVICE_NAME: 2025-08-14T22:06:44.9284572Z DEVICE_TYPE: 2025-08-14T22:06:44.9284763Z MONITOR_SCRIPT_PID: 49007 2025-08-14T22:06:44.9284967Z ##[endgroup] 2025-08-14T22:06:44.9441261Z Prepare all required actions 2025-08-14T22:06:44.9441742Z Getting action download info 2025-08-14T22:06:45.0993285Z Download action repository 'seemethere/upload-artifact-s3@v5' (SHA:baba72d0712b404f646cebe0730933554ebce96a) 2025-08-14T22:06:45.3053212Z Download action repository 'actions/upload-artifact@v4' (SHA:ea165f8d65b6e75b540449e92b4886f43607fa02) 2025-08-14T22:06:45.6715987Z ##[group]Run ./.github/actions/upload-test-artifacts 2025-08-14T22:06:45.6716258Z with: 2025-08-14T22:06:45.6716599Z file-suffix: test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923 2025-08-14T22:06:45.6716997Z s3-bucket: gha-artifacts 2025-08-14T22:06:45.6717198Z env: 2025-08-14T22:06:45.6717371Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:45.6717713Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:45.6718074Z DEVICE_NAME: 2025-08-14T22:06:45.6718259Z DEVICE_TYPE: 2025-08-14T22:06:45.6718432Z ##[endgroup] 2025-08-14T22:06:45.6745655Z ##[group]Run # Remove any previous test jsons if they exist 
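The BENCHMARK_METADATA blob passed to upload_benchmark_results.py above is a schema-v3 record that ties the uploaded results to this workflow run and job. Below is a minimal sketch of assembling such a record, using only the field names and values visible in this log; gather_metadata.py's actual implementation is not shown here.

```python
# Sketch: a schema-v3 metadata record built from the values visible in this log.
import json
import time

metadata = {
    "timestamp": int(time.time()),
    "schema_version": "v3",
    "name": ("linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test "
             "(cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2)"),
    "repo": "pytorch/pytorch",
    "head_branch": "refs/heads/main",
    "head_sha": "1fc683cf17c8c673044538d10266c00f92987be2",
    "workflow_id": 16976338999,
    "run_attempt": 1,
    "job_id": 48128301923,
}
print(json.dumps(metadata))  # this JSON string is what --metadata receives
```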
2025-08-14T22:06:45.6746089Z # Remove any previous test jsons if they exist 2025-08-14T22:06:45.6746425Z rm -f test-jsons-*.zip 2025-08-14T22:06:45.6746906Z zip -r "test-jsons-${FILE_SUFFIX}.zip" test/test-reports -i '*.json' 2025-08-14T22:06:45.6755026Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:45.6755310Z env: 2025-08-14T22:06:45.6755497Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:45.6755840Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:45.6756198Z DEVICE_NAME: 2025-08-14T22:06:45.6756382Z DEVICE_TYPE: 2025-08-14T22:06:45.6756726Z FILE_SUFFIX: test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923 2025-08-14T22:06:45.6757181Z ##[endgroup] 2025-08-14T22:06:45.6947901Z adding: test/test-reports/inference_huggingface.json (deflated 99%) 2025-08-14T22:06:45.6965782Z ##[group]Run # Remove any previous test reports if they exist 2025-08-14T22:06:45.6966153Z # Remove any previous test reports if they exist 2025-08-14T22:06:45.6966455Z rm -f test-reports-*.zip 2025-08-14T22:06:45.6966824Z zip -r "test-reports-${FILE_SUFFIX}.zip" test/test-reports -i '*.xml' -i '*.csv' 2025-08-14T22:06:45.6976097Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:45.6976389Z env: 2025-08-14T22:06:45.6976581Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:45.6976922Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:45.6977295Z DEVICE_NAME: 2025-08-14T22:06:45.6977478Z DEVICE_TYPE: 2025-08-14T22:06:45.6977814Z FILE_SUFFIX: test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923 2025-08-14T22:06:45.6978206Z ##[endgroup] 2025-08-14T22:06:45.7034391Z adding: test/test-reports/inference_huggingface.csv (deflated 69%) 2025-08-14T22:06:45.7035160Z adding: test/test-reports/inference_huggingface_graph_breaks.csv (deflated 85%) 2025-08-14T22:06:45.7035829Z adding: test/test-reports/inference_huggingface_graph_break_deduped.csv (deflated 64%) 2025-08-14T22:06:45.7054006Z ##[group]Run # Remove any previous usage logs if they exist 2025-08-14T22:06:45.7054372Z # Remove any previous usage logs if they exist 2025-08-14T22:06:45.7058863Z rm -f logs-*.zip 2025-08-14T22:06:45.7059147Z zip "logs-${FILE_SUFFIX}.zip" 'usage_log.txt' || true 2025-08-14T22:06:45.7059546Z zip -r "logs-${FILE_SUFFIX}.zip" test/test-reports -i '*.log' || true 2025-08-14T22:06:45.7064503Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:45.7064791Z env: 2025-08-14T22:06:45.7064971Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:45.7065313Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:45.7065684Z DEVICE_NAME: 2025-08-14T22:06:45.7065868Z DEVICE_TYPE: 2025-08-14T22:06:45.7066357Z FILE_SUFFIX: test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923 2025-08-14T22:06:45.7066742Z ##[endgroup] 2025-08-14T22:06:45.7177239Z adding: usage_log.txt (deflated 96%) 2025-08-14T22:06:45.7193889Z 2025-08-14T22:06:45.7194346Z zip error: Nothing to do! 
(logs-test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923.zip) 2025-08-14T22:06:45.7214001Z ##[group]Run # Remove any previous debugging artifacts if they exist 2025-08-14T22:06:45.7214573Z # Remove any previous debugging artifacts if they exist 2025-08-14T22:06:45.7214877Z rm -f debug-*.zip 2025-08-14T22:06:45.7215103Z if [ -d 'test/debug' ]; then 2025-08-14T22:06:45.7215386Z  zip -r "debug-${FILE_SUFFIX}.zip" test/debug 2025-08-14T22:06:45.7215655Z fi 2025-08-14T22:06:45.7220285Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:45.7220576Z env: 2025-08-14T22:06:45.7220765Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:45.7221101Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:45.7221468Z DEVICE_NAME: 2025-08-14T22:06:45.7221654Z DEVICE_TYPE: 2025-08-14T22:06:45.7222071Z FILE_SUFFIX: test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923 2025-08-14T22:06:45.7222455Z ##[endgroup] 2025-08-14T22:06:45.7332094Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-08-14T22:06:45.7332360Z with: 2025-08-14T22:06:45.7332559Z s3-bucket: gha-artifacts 2025-08-14T22:06:45.7332815Z s3-prefix: pytorch/pytorch/16976338999/1/artifact 2025-08-14T22:06:45.7333094Z retention-days: 14 2025-08-14T22:06:45.7333304Z if-no-files-found: warn 2025-08-14T22:06:45.7333517Z path: test-jsons-*.zip 2025-08-14T22:06:45.7333798Z name: artifact 2025-08-14T22:06:45.7333982Z region: us-east-1 2025-08-14T22:06:45.7334156Z env: 2025-08-14T22:06:45.7334333Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:45.7334685Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:45.7335049Z DEVICE_NAME: 2025-08-14T22:06:45.7335232Z DEVICE_TYPE: 2025-08-14T22:06:45.7335406Z ##[endgroup] 2025-08-14T22:06:46.1012347Z NOTE: s3-prefix specified, ignoring name parameter 2025-08-14T22:06:46.1012725Z With the provided path, there will be 1 file uploaded 2025-08-14T22:06:46.1015280Z Uploading to s3 prefix: pytorch/pytorch/16976338999/1/artifact 2025-08-14T22:06:46.1070485Z Starting upload of test-jsons-test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923.zip 2025-08-14T22:06:46.2328944Z Finished upload of test-jsons-test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923.zip 2025-08-14T22:06:46.2491709Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-08-14T22:06:46.2492059Z with: 2025-08-14T22:06:46.2496402Z s3-bucket: gha-artifacts 2025-08-14T22:06:46.2496665Z s3-prefix: pytorch/pytorch/16976338999/1/artifact 2025-08-14T22:06:46.2496949Z retention-days: 14 2025-08-14T22:06:46.2497157Z if-no-files-found: error 2025-08-14T22:06:46.2497377Z path: test-reports-*.zip 2025-08-14T22:06:46.2497581Z name: artifact 2025-08-14T22:06:46.2497852Z region: us-east-1 2025-08-14T22:06:46.2498041Z env: 2025-08-14T22:06:46.2498250Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:46.2498619Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:46.2498983Z DEVICE_NAME: 2025-08-14T22:06:46.2499163Z DEVICE_TYPE: 2025-08-14T22:06:46.2499334Z ##[endgroup] 2025-08-14T22:06:46.6011588Z NOTE: s3-prefix specified, ignoring name parameter 2025-08-14T22:06:46.6012032Z With the provided path, there will be 1 file uploaded 2025-08-14T22:06:46.6012458Z Uploading to s3 prefix: pytorch/pytorch/16976338999/1/artifact 2025-08-14T22:06:46.6065250Z Starting upload of 
test-reports-test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923.zip 2025-08-14T22:06:46.7543539Z Finished upload of test-reports-test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923.zip 2025-08-14T22:06:46.7707950Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-08-14T22:06:46.7708206Z with: 2025-08-14T22:06:46.7708391Z s3-bucket: gha-artifacts 2025-08-14T22:06:46.7708658Z s3-prefix: pytorch/pytorch/16976338999/1/artifact 2025-08-14T22:06:46.7708918Z retention-days: 14 2025-08-14T22:06:46.7709122Z if-no-files-found: ignore 2025-08-14T22:06:46.7709338Z path: logs-*.zip 2025-08-14T22:06:46.7709515Z name: artifact 2025-08-14T22:06:46.7709700Z region: us-east-1 2025-08-14T22:06:46.7709876Z env: 2025-08-14T22:06:46.7710040Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:46.7710386Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:46.7710763Z DEVICE_NAME: 2025-08-14T22:06:46.7710947Z DEVICE_TYPE: 2025-08-14T22:06:46.7711114Z ##[endgroup] 2025-08-14T22:06:47.1270170Z NOTE: s3-prefix specified, ignoring name parameter 2025-08-14T22:06:47.1270661Z With the provided path, there will be 1 file uploaded 2025-08-14T22:06:47.1271064Z Uploading to s3 prefix: pytorch/pytorch/16976338999/1/artifact 2025-08-14T22:06:47.1331877Z Starting upload of logs-test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923.zip 2025-08-14T22:06:47.3129110Z Finished upload of logs-test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923.zip 2025-08-14T22:06:47.3343206Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-08-14T22:06:47.3343475Z with: 2025-08-14T22:06:47.3343665Z s3-bucket: gha-artifacts 2025-08-14T22:06:47.3343925Z s3-prefix: pytorch/pytorch/16976338999/1/artifact 2025-08-14T22:06:47.3344191Z retention-days: 14 2025-08-14T22:06:47.3344390Z if-no-files-found: ignore 2025-08-14T22:06:47.3344605Z path: debug-*.zip 2025-08-14T22:06:47.3344880Z name: artifact 2025-08-14T22:06:47.3345062Z region: us-east-1 2025-08-14T22:06:47.3345248Z env: 2025-08-14T22:06:47.3345418Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:47.3345782Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:47.3346156Z DEVICE_NAME: 2025-08-14T22:06:47.3346337Z DEVICE_TYPE: 2025-08-14T22:06:47.3346508Z ##[endgroup] 2025-08-14T22:06:47.6857268Z No files were found with the provided path: debug-*.zip. No artifacts will be uploaded. 2025-08-14T22:06:47.7029892Z ##[group]Run # shellcheck disable=SC2156 2025-08-14T22:06:47.7030201Z # shellcheck disable=SC2156 2025-08-14T22:06:47.7030644Z find . 
-iname "core.[1-9]*" -exec docker exec "${DOCKER_CONTAINER_ID}" sh -c "gdb python {} -ex 'bt' -ex 'q'" \; 2025-08-14T22:06:47.7036304Z shell: /usr/bin/bash -e {0} 2025-08-14T22:06:47.7036520Z env: 2025-08-14T22:06:47.7036693Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:47.7037051Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:47.7037423Z DEVICE_NAME: 2025-08-14T22:06:47.7037608Z DEVICE_TYPE: 2025-08-14T22:06:47.7037778Z ##[endgroup] 2025-08-14T22:06:48.0362454Z Prepare all required actions 2025-08-14T22:06:48.0362788Z Getting action download info 2025-08-14T22:06:48.1417532Z ##[group]Run ./.github/actions/upload-utilization-stats 2025-08-14T22:06:48.1417832Z with: 2025-08-14T22:06:48.1418023Z job_id: 48128301923 2025-08-14T22:06:48.1418541Z job_name: linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2) 2025-08-14T22:06:48.1419160Z workflow_name: inductor-periodic 2025-08-14T22:06:48.1419402Z workflow_run_id: 16976338999 2025-08-14T22:06:48.1419616Z workflow_attempt: 1 2025-08-14T22:06:48.1419803Z env: 2025-08-14T22:06:48.1419970Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:48.1420312Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:48.1420672Z DEVICE_NAME: 2025-08-14T22:06:48.1420847Z DEVICE_TYPE: 2025-08-14T22:06:48.1421019Z ##[endgroup] 2025-08-14T22:06:48.1438951Z ##[group]Run echo "workflow_id: 16976338999" 2025-08-14T22:06:48.1439238Z echo "workflow_id: 16976338999" 2025-08-14T22:06:48.1439497Z echo "workflow_attempt: 1" 2025-08-14T22:06:48.1439769Z echo "workflow_Name: inductor-periodic" 2025-08-14T22:06:48.1440040Z echo "job_id: 48128301923" 2025-08-14T22:06:48.1440603Z echo "job_name: linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2)" 2025-08-14T22:06:48.1441313Z echo "artifact_prefix: " 2025-08-14T22:06:48.1441558Z python3 --version 2025-08-14T22:06:48.1446933Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:48.1447219Z env: 2025-08-14T22:06:48.1447395Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:48.1447777Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:48.1448142Z DEVICE_NAME: 2025-08-14T22:06:48.1448320Z DEVICE_TYPE: 2025-08-14T22:06:48.1448504Z ##[endgroup] 2025-08-14T22:06:48.1471578Z workflow_id: 16976338999 2025-08-14T22:06:48.1471827Z workflow_attempt: 1 2025-08-14T22:06:48.1472053Z workflow_Name: inductor-periodic 2025-08-14T22:06:48.1472395Z job_id: 48128301923 2025-08-14T22:06:48.1473196Z job_name: linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2) 2025-08-14T22:06:48.1473888Z artifact_prefix: 2025-08-14T22:06:48.1492160Z Python 3.9.23 2025-08-14T22:06:48.1537963Z ##[group]Run nick-fields/retry@v3.0.0 2025-08-14T22:06:48.1538219Z with: 2025-08-14T22:06:48.1538390Z shell: bash 2025-08-14T22:06:48.1538580Z timeout_minutes: 5 2025-08-14T22:06:48.1538772Z max_attempts: 5 2025-08-14T22:06:48.1538961Z retry_wait_seconds: 30 2025-08-14T22:06:48.1539443Z command: set -eu python3 -m pip install python-dateutil==2.8.2 boto3==1.35.42 pandas==2.1.3 dataclasses_json==0.6.7 2025-08-14T22:06:48.1539880Z polling_interval_seconds: 1 2025-08-14T22:06:48.1540108Z warning_on_retry: true 2025-08-14T22:06:48.1540320Z continue_on_error: false 2025-08-14T22:06:48.1540528Z 
env: 2025-08-14T22:06:48.1540701Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:48.1541033Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:48.1541399Z DEVICE_NAME: 2025-08-14T22:06:48.1541576Z DEVICE_TYPE: 2025-08-14T22:06:48.1541743Z ##[endgroup] 2025-08-14T22:06:48.5189878Z Defaulting to user installation because normal site-packages is not writeable 2025-08-14T22:06:48.5965974Z Collecting python-dateutil==2.8.2 2025-08-14T22:06:48.6262665Z Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB) 2025-08-14T22:06:49.6673956Z Collecting boto3==1.35.42 2025-08-14T22:06:49.6707342Z Downloading boto3-1.35.42-py3-none-any.whl (139 kB) 2025-08-14T22:06:50.2172925Z Collecting pandas==2.1.3 2025-08-14T22:06:50.2205254Z Downloading pandas-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.3 MB) 2025-08-14T22:06:50.3769659Z Requirement already satisfied: dataclasses_json==0.6.7 in /home/ec2-user/.local/lib/python3.9/site-packages (0.6.7) 2025-08-14T22:06:50.3784056Z Requirement already satisfied: six>=1.5 in /usr/lib/python3.9/site-packages (from python-dateutil==2.8.2) (1.15.0) 2025-08-14T22:06:50.3836891Z Requirement already satisfied: botocore<1.36.0,>=1.35.42 in /home/ec2-user/.local/lib/python3.9/site-packages (from boto3==1.35.42) (1.35.99) 2025-08-14T22:06:50.3838225Z Requirement already satisfied: s3transfer<0.11.0,>=0.10.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from boto3==1.35.42) (0.10.4) 2025-08-14T22:06:50.3841185Z Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in /usr/lib/python3.9/site-packages (from boto3==1.35.42) (0.10.0) 2025-08-14T22:06:50.4816129Z Collecting tzdata>=2022.1 2025-08-14T22:06:50.4855875Z Downloading tzdata-2025.2-py2.py3-none-any.whl (347 kB) 2025-08-14T22:06:51.3730195Z Collecting numpy<2,>=1.22.4 2025-08-14T22:06:51.3772437Z Downloading numpy-1.26.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB) 2025-08-14T22:06:51.6238984Z Requirement already satisfied: pytz>=2020.1 in /usr/lib/python3.9/site-packages (from pandas==2.1.3) (2022.7.1) 2025-08-14T22:06:51.6271023Z Requirement already satisfied: typing-inspect<1,>=0.4.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from dataclasses_json==0.6.7) (0.9.0) 2025-08-14T22:06:51.6283131Z Requirement already satisfied: marshmallow<4.0.0,>=3.18.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from dataclasses_json==0.6.7) (3.26.1) 2025-08-14T22:06:51.6338979Z Requirement already satisfied: urllib3<1.27,>=1.25.4 in /usr/lib/python3.9/site-packages (from botocore<1.36.0,>=1.35.42->boto3==1.35.42) (1.25.10) 2025-08-14T22:06:51.6460108Z Requirement already satisfied: packaging>=17.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from marshmallow<4.0.0,>=3.18.0->dataclasses_json==0.6.7) (25.0) 2025-08-14T22:06:51.6570709Z Requirement already satisfied: mypy-extensions>=0.3.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from typing-inspect<1,>=0.4.0->dataclasses_json==0.6.7) (1.1.0) 2025-08-14T22:06:51.6574049Z Requirement already satisfied: typing-extensions>=3.7.4 in /home/ec2-user/.local/lib/python3.9/site-packages (from typing-inspect<1,>=0.4.0->dataclasses_json==0.6.7) (4.14.1) 2025-08-14T22:06:51.8294123Z Installing collected packages: python-dateutil, tzdata, numpy, pandas, boto3 2025-08-14T22:06:57.4154220Z Attempting uninstall: boto3 2025-08-14T22:06:57.4154869Z Found existing installation: boto3 1.35.33 2025-08-14T22:06:57.4267531Z Uninstalling boto3-1.35.33: 
2025-08-14T22:06:57.4279196Z Successfully uninstalled boto3-1.35.33 2025-08-14T22:06:57.4888388Z Successfully installed boto3-1.35.42 numpy-1.26.4 pandas-2.1.3 python-dateutil-2.8.2 tzdata-2025.2 2025-08-14T22:06:58.2487028Z Command completed after 1 attempt(s). 2025-08-14T22:06:58.2535706Z ##[group]Run python3 -m tools.stats.upload_utilization_stats.upload_utilization_stats \ 2025-08-14T22:06:58.2536250Z python3 -m tools.stats.upload_utilization_stats.upload_utilization_stats \ 2025-08-14T22:06:58.2536634Z  --workflow-run-id "16976338999" \ 2025-08-14T22:06:58.2536911Z  --workflow-name "inductor-periodic" \ 2025-08-14T22:06:58.2537200Z  --workflow-run-attempt "1" \ 2025-08-14T22:06:58.2537455Z  --job-id "48128301923" \ 2025-08-14T22:06:58.2542256Z  --job-name "linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2)" \ 2025-08-14T22:06:58.2542828Z  --local-path "" \ 2025-08-14T22:06:58.2543059Z  --artifact-prefix "" 2025-08-14T22:06:58.2548422Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:58.2549132Z env: 2025-08-14T22:06:58.2549345Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:58.2549702Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:58.2550259Z DEVICE_NAME: 2025-08-14T22:06:58.2550460Z DEVICE_TYPE: 2025-08-14T22:06:58.2550650Z ##[endgroup] 2025-08-14T22:06:59.5033953Z repo: pytorch/pytorch 2025-08-14T22:06:59.5034320Z Search for test log in s3 bucket: ossci-utilization 2025-08-14T22:06:59.5034908Z Downloading logs-test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923.zip 2025-08-14T22:06:59.5035592Z extracting usage_log.txt from zip file logs-test-cpu_inductor_freezing_avx2_huggingface-1-1-linux.10xlarge.avx2_48128301923.zip 2025-08-14T22:06:59.5036109Z Converted Log Model: UtilizationMetadata: 2025-08-14T22:06:59.5043862Z UtilizationMetadata(level='metadata', workflow_id='16976338999', job_id='48128301923', workflow_name='inductor-periodic', job_name='linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2)', usage_collect_interval=1.0, data_model_version=1.5, start_at=1755207139, gpu_count=0, cpu_count=40, gpu_type=None, error=None) 2025-08-14T22:06:59.5045124Z [Db Segments] detected pytest cmd: 9, generated segments: 9 2025-08-14T22:06:59.5045436Z [db model] Peek db timeseries 2025-08-14T22:06:59.5045657Z :{ 2025-08-14T22:06:59.5045835Z "created_at": 1755209219, 2025-08-14T22:06:59.5046060Z "type": "utilization", 2025-08-14T22:06:59.5046268Z "tags": [ 2025-08-14T22:06:59.5046454Z "record" 2025-08-14T22:06:59.5046623Z ], 2025-08-14T22:06:59.5046794Z "time_stamp": 1755207139, 2025-08-14T22:06:59.5047018Z "repo": "pytorch/pytorch", 2025-08-14T22:06:59.5047232Z "workflow_id": 16976338999, 2025-08-14T22:06:59.5047444Z "run_attempt": 1, 2025-08-14T22:06:59.5047644Z "job_id": 48128301923, 2025-08-14T22:06:59.5047863Z "workflow_name": "inductor-periodic", 2025-08-14T22:06:59.5048437Z "job_name": "linux-jammy-cpu-py3.9-gcc11-periodic-dynamo-benchmarks / test (cpu_inductor_freezing_avx2_huggingface, 1, 1, linux.10xlarge.avx2)", 2025-08-14T22:06:59.5049378Z "json_data": "{}" 2025-08-14T22:06:59.5049571Z } 2025-08-14T22:06:59.5049958Z Writing 1 documents to S3 ossci-utilization/util_metadata/v_1.5/pytorch/pytorch/16976338999/1/48128301923/metadata 2025-08-14T22:06:59.5050633Z Done! 
Finish writing document to S3 ossci-utilization/util_metadata/v_1.5/pytorch/pytorch/16976338999/1/48128301923/metadata 2025-08-14T22:06:59.5051742Z Writing 406 documents to S3 ossci-utilization/util_timeseries/v_1.5/pytorch/pytorch/16976338999/1/48128301923/time_series 2025-08-14T22:06:59.5052435Z Done! Finish writing document to S3 ossci-utilization/util_timeseries/v_1.5/pytorch/pytorch/16976338999/1/48128301923/time_series 2025-08-14T22:06:59.6166661Z ##[group]Run pytorch/test-infra/.github/actions/teardown-linux@main 2025-08-14T22:06:59.6166996Z with: 2025-08-14T22:06:59.6167168Z env: 2025-08-14T22:06:59.6167405Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:59.6167795Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:59.6168307Z DEVICE_NAME: 2025-08-14T22:06:59.6168483Z DEVICE_TYPE: 2025-08-14T22:06:59.6168666Z ##[endgroup] 2025-08-14T22:06:59.6181561Z ##[group]Run set -eou pipefail 2025-08-14T22:06:59.6181824Z set -eou pipefail 2025-08-14T22:06:59.6182042Z  2025-08-14T22:06:59.6182408Z echo "Holding runner for 2 hours until all ssh sessions have logged out" 2025-08-14T22:06:59.6186767Z for _ in $(seq 1440); do 2025-08-14T22:06:59.6187036Z  # Break if no ssh session exists anymore 2025-08-14T22:06:59.6187318Z  if [ "$(who)" = "" ]; then 2025-08-14T22:06:59.6187546Z  break 2025-08-14T22:06:59.6187777Z  fi 2025-08-14T22:06:59.6187954Z  echo "." 2025-08-14T22:06:59.6188149Z  sleep 5 2025-08-14T22:06:59.6188335Z done 2025-08-14T22:06:59.6193555Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:59.6193855Z env: 2025-08-14T22:06:59.6194030Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:59.6194373Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:59.6194741Z DEVICE_NAME: 2025-08-14T22:06:59.6194926Z DEVICE_TYPE: 2025-08-14T22:06:59.6195158Z ##[endgroup] 2025-08-14T22:06:59.6217742Z Holding runner for 2 hours until all ssh sessions have logged out 2025-08-14T22:06:59.6306769Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty 2025-08-14T22:06:59.6307291Z # ignore expansion of "docker ps -q" since it could be empty 2025-08-14T22:06:59.6307619Z # shellcheck disable=SC2046 2025-08-14T22:06:59.6307903Z docker stop $(docker ps -q) || true 2025-08-14T22:06:59.6308170Z # Prune all of the docker images 2025-08-14T22:06:59.6308431Z docker system prune -af 2025-08-14T22:06:59.6319918Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:06:59.6320271Z env: 2025-08-14T22:06:59.6320470Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:06:59.6320909Z DOCKER_CONTAINER_ID: 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:06:59.6321485Z DEVICE_NAME: 2025-08-14T22:06:59.6321694Z DEVICE_TYPE: 2025-08-14T22:06:59.6321887Z ##[endgroup] 2025-08-14T22:07:10.7388261Z 047dfac93b61 2025-08-14T22:07:11.1861363Z Deleted Containers: 2025-08-14T22:07:11.1861796Z 047dfac93b6128065200280308a926fb6262842693943f4b8cd54d023af5c2f2 2025-08-14T22:07:11.1862137Z 2025-08-14T22:07:22.3489846Z Deleted Images: 2025-08-14T22:07:22.3490681Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T22:07:22.3491647Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image@sha256:4236794baba289041d240d08fd393bbd57497c3012e5e0ccd9fd98f61ebf35c6 2025-08-14T22:07:22.3492304Z deleted: 
sha256:0899ae453036ee7a91795ea95b1db61000579eeb74b140edab5976919ee64bbe 2025-08-14T22:07:22.3492786Z deleted: sha256:aa7b544271e9ba3105dabd1afb12e315887018f3471e03135c1d50e64cc550c4 2025-08-14T22:07:22.3493253Z deleted: sha256:4c685831817cc2fc6dfdfda1726df1f402222d8cdccc40daad3198cf8b17e3f4 2025-08-14T22:07:22.3493734Z deleted: sha256:cedf3fb09a62e68c6d7e22cedbce12e77166a50649d0269200ee0efce8a57b88 2025-08-14T22:07:22.3494378Z deleted: sha256:1b3a9a237b4153f8f523a85cead9d36e29717eb57182e2f75069788681627d95 2025-08-14T22:07:22.3494847Z deleted: sha256:67bd313103dfbe7fe0172e6f4f7ee420fad9743a64a1cc1cd20bc22250d3602c 2025-08-14T22:07:22.3495314Z deleted: sha256:b17820137ada46a2a726c67aa08cce73d2ead7c95db08575cf5e69bedb4b600d 2025-08-14T22:07:22.3495928Z deleted: sha256:b16c9bc40cc1cf924638323aece4168d6332cfae212dad2a431a584a44fe967c 2025-08-14T22:07:22.3496410Z deleted: sha256:ab35ed781133eb4aaa1b2478aea73fb80dc71bceffbe474b55e1a60fc6c5ffbe 2025-08-14T22:07:22.3496933Z deleted: sha256:b9d0b0720dd9c0bcb4f174ae6770a7c2fe540c6983872180f3a5e18300434cdb 2025-08-14T22:07:22.3497397Z deleted: sha256:f5d1a4f32d90030cc174d73b579758d28f95c992a8cf21360e5addee99dea169 2025-08-14T22:07:22.3497867Z deleted: sha256:4af408141f8591f4b69cef9b425b6caa3c4cbc62ced38b5d08f3150f0c8ff449 2025-08-14T22:07:22.3498342Z deleted: sha256:e0019e5c461051e54a9af37ae22b49cfd2c2e5366da57a20304f6ef89171a3b3 2025-08-14T22:07:22.3503082Z deleted: sha256:542f999b2cfc965b97861645356840864e9946fa2fa40f1f5c4c45684e91c239 2025-08-14T22:07:22.3503587Z deleted: sha256:633629aa3d4ae6472e222a1c0b2ceb729b0d84ccb48e12d52ba2d2987c9063e1 2025-08-14T22:07:22.3504061Z deleted: sha256:ea645aba1ba54baac43713f3df7f1b89dd119764a747273897eb2931fea42856 2025-08-14T22:07:22.3504532Z deleted: sha256:1f50e367efff88c7182b9dc3ff618c1cf7bd34edf2f31805e268c50fac02a627 2025-08-14T22:07:22.3505012Z deleted: sha256:aff22d7ae43d842befa617e2e5f9878d09a82b67c362b0c44a40a4c88be92120 2025-08-14T22:07:22.3505481Z deleted: sha256:4275d4addb77b473ed40194e42918cf2aeb484d1d8e25cf54d374392643a095c 2025-08-14T22:07:22.3505944Z deleted: sha256:66471f6c8dc869455ff193909110d824b5d65f7383877a7d0face6331b21fff3 2025-08-14T22:07:22.3506391Z deleted: sha256:8cfd2d55570494ff2b993725f5eb13d0440a5698fa905823ca1677d2d16febb8 2025-08-14T22:07:22.3506858Z deleted: sha256:5c8cf8b9c4a76f679994decc8800bc6eefd258a8dc6293a714d5e100fea3a1bc 2025-08-14T22:07:22.3507337Z deleted: sha256:1acc162c6b9de62d13ce7fd33bb9b134458f7e7dbe996e5442e0047ec8f70c80 2025-08-14T22:07:22.3507870Z deleted: sha256:044bab98f3bceb1948c626ce6bdd19d3ec8f9c5ad42a4f635dd685a7ae9c9024 2025-08-14T22:07:22.3508341Z deleted: sha256:2acb11a9448f13c2c2d29c4d0d4013e046862bd019cf5ec9fe04bdf35299f1dd 2025-08-14T22:07:22.3508812Z deleted: sha256:8e7b56334416233f301944000dec16952e13bb69296cc80e1031bfecaf6e7f9d 2025-08-14T22:07:22.3509285Z deleted: sha256:4a4d1ec727c43389a601aefccdaeff6b3bf54c0daefb12e0c2098c3e18b383ba 2025-08-14T22:07:22.3509747Z deleted: sha256:8b9ca4276331196a2f03c2fa3a87422d2042cf06011b49368c2335be7da829c1 2025-08-14T22:07:22.3510205Z deleted: sha256:5076357fd3cc8b06ed54a0f692362a38f1ebafa4843c0b0bf8021f9021d2e583 2025-08-14T22:07:22.3510672Z deleted: sha256:f9451fa0842798e2a67c059fda5124cafb401801bb8c40d03ae736ff3ef5ed20 2025-08-14T22:07:22.3511128Z deleted: sha256:52b716f02091d6af6b79e7b2e1f5bbd7391235993d415c7a852d6752220c8b65 2025-08-14T22:07:22.3511570Z deleted: sha256:748225161c361d3779c96eb7ae5ea0c33d35311f9445c371d62616b98e3426e8 2025-08-14T22:07:22.3512029Z deleted: 
sha256:5eeda1478a46d8d58267e8917422eb0a182a40c8bdfb4bfe0869923f8114c770 2025-08-14T22:07:22.3512499Z deleted: sha256:66d4cebb04304f556dd191b425a876f7dbbcde8c3c647af4ef47c10804e51f5a 2025-08-14T22:07:22.3512946Z deleted: sha256:0b526447174d22890be2bc866228e40989483b1102a0430b4ab3ad16dc6c7787 2025-08-14T22:07:22.3513487Z deleted: sha256:1aa31d55f8f9bb51f1eb702ba7d46ceda8290ed90e8e8cf299bb8a9179bf2ae2 2025-08-14T22:07:22.3514018Z deleted: sha256:dd1f47c8dc7518f303a91fc8aae81a512caff53987d5a89a378bb24c1c6d7707 2025-08-14T22:07:22.3514490Z deleted: sha256:d60f9527fcb284e73795a37d4f536badd451a2eade4c9314ebe549d31efcc876 2025-08-14T22:07:22.3514940Z deleted: sha256:f23ad0355704751b0f71a8900169354e3bf23a7b3f5fa2cd9b2478a561bfbb45 2025-08-14T22:07:22.3515505Z deleted: sha256:10e7acf6460743fcad0c1fff0bbd01158fbeb88151621c1e15ae5994f1c8ef55 2025-08-14T22:07:22.3515967Z deleted: sha256:f674e3067e97f1407f4cd55202d4c0c8641f02811550e65a00a875fc19354b75 2025-08-14T22:07:22.3516479Z deleted: sha256:8a9c75c896425ccd25101f0cf39316bec7779111954f44df726842bf583e907b 2025-08-14T22:07:22.3516944Z deleted: sha256:9730d30edfcaa135287479d80f1720b39c6f728228df6d0eb7f095e917cc16b6 2025-08-14T22:07:22.3517404Z deleted: sha256:2787e13cf97e870ca65312526c3000163ebf3da20fe59e5f5d53b1aeb4fb424b 2025-08-14T22:07:22.3517946Z deleted: sha256:d61197909174795bd69f8d5f534f1b086065d36b7aa6c5a50744eca6f8d6b12b 2025-08-14T22:07:22.3518421Z deleted: sha256:ecdfbb81e95b2ae2c8e9ab4ca72ba8564095caabb0512a47da8f866923f71bff 2025-08-14T22:07:22.3518936Z deleted: sha256:cd2d7c644df243742a0c0349af0d37570c06fdd1711ddc367e79514757a6d5cc 2025-08-14T22:07:22.3519430Z deleted: sha256:6703ab1ced70b30a87660c0dd778fe95fb90b04ed8461c2a331272aa54eb3499 2025-08-14T22:07:22.3519891Z deleted: sha256:b7088ce49d7df1d6fb18eee5fc5664e637c5649c89e581d972c76a83f60d0a62 2025-08-14T22:07:22.3520362Z deleted: sha256:d0d2786658af9907d8c4ecfa84fa9e2bd07131257264395b804deef744a5c39c 2025-08-14T22:07:22.3520831Z deleted: sha256:d46baf72d8e570e6004c6f95131cea6ede27eb01c213d8c1e8b263ab95fdfe95 2025-08-14T22:07:22.3521380Z deleted: sha256:0219ea0bd0e38d169ed596ed80807b0f70b609ec5f886d671c249d10575dff2c 2025-08-14T22:07:22.3521835Z deleted: sha256:77d1a1f15cf8ae85a4c5495d800378c307967004360814810fd13b07a74aee5e 2025-08-14T22:07:22.3522291Z deleted: sha256:47c77d89ce8782a94a6f5435b1611a76b47f830153ba4b462d3e08dcbdaa40f7 2025-08-14T22:07:22.3522764Z deleted: sha256:d5120b2e61fb0ccc32a2ad02fc0b2b908bc69f1f174268bde3d26d79ce46f046 2025-08-14T22:07:22.3523222Z deleted: sha256:65626052fd7e03a8e90c72072a54f0eaa43788cfcb0835ffb98b700be89b0567 2025-08-14T22:07:22.3523677Z deleted: sha256:05c09c0832c35f0128e0258b1d3069d7bb4b94ce58239faba5d585e49c34e904 2025-08-14T22:07:22.3524134Z deleted: sha256:2d6749fb2c30585eebb1d97e99318434ec34e0f7a4414e552fd4a44175f86839 2025-08-14T22:07:22.3524599Z deleted: sha256:2d65e2932810021e5b3cfedd89cfd851dd47fce63fbe5dc6959e59f3d8a98499 2025-08-14T22:07:22.3525078Z deleted: sha256:b2e71ddacad35b6caa3a77429bab51b654f6acaccc9e9263f1cb43edb8c53ac3 2025-08-14T22:07:22.3525550Z deleted: sha256:632a43100a629c40972b4da95fbbb581f29fe8b073a96386c72931d27ffbbefa 2025-08-14T22:07:22.3526011Z deleted: sha256:11964e5f5833fdf2bcc61c52f33d5aebf9b5504c6792baf58beb96b90398d10a 2025-08-14T22:07:22.3526474Z deleted: sha256:f0c1cb4c9e4655464b9b62b6589ac5005c2392213765ab4175bd61e3f6462643 2025-08-14T22:07:22.3526943Z deleted: sha256:5113aaee4b4d5ee45b58bcee467ac314112b02e4c4e5e9c3cc7a236dd308e9de 2025-08-14T22:07:22.3527414Z deleted: 
sha256:9cdc88c7b7fe728e15c72d0e8eef813ace31905b4b317a0a23f1334b6a22e604 2025-08-14T22:07:22.3536327Z deleted: sha256:8056a3da01752a91095e2d0afd80b625172f0915f22f7d998b9b926b9462dc5f 2025-08-14T22:07:22.3536924Z deleted: sha256:8a99968112e0edd39c242f3452b05d167911724468fdd9b18d11a8f5fa9c3ac8 2025-08-14T22:07:22.3537591Z deleted: sha256:6f70653bcfea9c1dd39aba76713adac0ac8f6f4c202387ff86a3ffe45d2079f2 2025-08-14T22:07:22.3538234Z deleted: sha256:9a0ed45f26188ecbfcf7658f46e29922b441969b2aded64d1d6b287b6de2e49c 2025-08-14T22:07:22.3538859Z deleted: sha256:f84c75780b110e68f7593fe9592456387118761b365a954a105aee72016adeac 2025-08-14T22:07:22.3539473Z deleted: sha256:1a5a81f8cbb945eee96e25ee8b4958d7140bb6751b86bc2e4a6aa9e18a16846c 2025-08-14T22:07:22.3539948Z deleted: sha256:7e072dc6aa8c1831ddc97ba8229235081976cb8036c06ee1320b33606e03f9a4 2025-08-14T22:07:22.3540418Z deleted: sha256:369af3627df8ecb48c51ea4fd3267e561b2f6821075ddce314e9485494447f16 2025-08-14T22:07:22.3540882Z deleted: sha256:4d49b99f2eee0f82788e33a9c771f75b1411b0b70ce47771fc1b3bc160f23961 2025-08-14T22:07:22.3541357Z deleted: sha256:fe04dcb9c711f36f9ed1df5b2d0854d30dc5abaa6e6cd493b85d4c2e2d2c3e1b 2025-08-14T22:07:22.3541832Z deleted: sha256:4800771a0435c52d6e480540ffa8a65ecc51fdc82a91302c1a373e6021bc37ca 2025-08-14T22:07:22.3544418Z deleted: sha256:90a2bf02e851326fc70d05470553ed33e578342d6e06bfa0cfaf331c4079b7e4 2025-08-14T22:07:22.3544700Z 2025-08-14T22:07:22.3544798Z Total reclaimed space: 51.8GB 2025-08-14T22:07:22.3616738Z Post job cleanup. 2025-08-14T22:07:22.3677121Z Post job cleanup. 2025-08-14T22:07:22.4759683Z [command]/usr/bin/git version 2025-08-14T22:07:22.4806582Z git version 2.47.1 2025-08-14T22:07:22.4845458Z Copying '/home/ec2-user/.gitconfig' to '/home/ec2-user/actions-runner/_work/_temp/bee2cbc5-82c0-459e-af61-22c7ff31c2e2/.gitconfig' 2025-08-14T22:07:22.4866745Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/bee2cbc5-82c0-459e-af61-22c7ff31c2e2' before making global git config changes 2025-08-14T22:07:22.4867544Z Adding repository directory to the temporary git global config as a safe directory 2025-08-14T22:07:22.4868295Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/pytorch/pytorch 2025-08-14T22:07:22.4915460Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand 2025-08-14T22:07:22.4942478Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :" 2025-08-14T22:07:22.5307804Z Entering 'android/libs/fbjni' 2025-08-14T22:07:22.5373802Z Entering 'third_party/FP16' 2025-08-14T22:07:22.5441060Z Entering 'third_party/FXdiv' 2025-08-14T22:07:22.5496307Z Entering 'third_party/NNPACK' 2025-08-14T22:07:22.5566518Z Entering 'third_party/NVTX' 2025-08-14T22:07:22.5634846Z Entering 'third_party/VulkanMemoryAllocator' 2025-08-14T22:07:22.5697056Z Entering 'third_party/XNNPACK' 2025-08-14T22:07:22.5773731Z Entering 'third_party/aiter' 2025-08-14T22:07:22.5837623Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T22:07:22.5914514Z Entering 'third_party/benchmark' 2025-08-14T22:07:22.5976209Z Entering 'third_party/composable_kernel' 2025-08-14T22:07:22.6050568Z Entering 'third_party/cpp-httplib' 2025-08-14T22:07:22.6108285Z Entering 'third_party/cpuinfo' 2025-08-14T22:07:22.6186549Z Entering 'third_party/cudnn_frontend' 2025-08-14T22:07:22.6248512Z Entering 'third_party/cutlass' 
2025-08-14T22:07:22.6324216Z Entering 'third_party/fbgemm' 2025-08-14T22:07:22.6395348Z Entering 'third_party/fbgemm/external/asmjit' 2025-08-14T22:07:22.6463329Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-08-14T22:07:22.6534162Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-08-14T22:07:22.6596523Z Entering 'third_party/fbgemm/external/cutlass' 2025-08-14T22:07:22.6668637Z Entering 'third_party/fbgemm/external/googletest' 2025-08-14T22:07:22.6731364Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-08-14T22:07:22.6784391Z Entering 'third_party/fbgemm/external/json' 2025-08-14T22:07:22.6845234Z Entering 'third_party/flash-attention' 2025-08-14T22:07:22.6904354Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T22:07:22.6984399Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-08-14T22:07:22.7049159Z Entering 'third_party/flatbuffers' 2025-08-14T22:07:22.7117835Z Entering 'third_party/fmt' 2025-08-14T22:07:22.7181655Z Entering 'third_party/gemmlowp/gemmlowp' 2025-08-14T22:07:22.7234509Z Entering 'third_party/gloo' 2025-08-14T22:07:22.7290924Z Entering 'third_party/googletest' 2025-08-14T22:07:22.7360719Z Entering 'third_party/ideep' 2025-08-14T22:07:22.7422301Z Entering 'third_party/ideep/mkl-dnn' 2025-08-14T22:07:22.7494960Z Entering 'third_party/ittapi' 2025-08-14T22:07:22.7541167Z Entering 'third_party/kineto' 2025-08-14T22:07:22.7608119Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T22:07:22.7667005Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-08-14T22:07:22.7717094Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T22:07:22.7785019Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T22:07:22.7843307Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T22:07:22.7901538Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T22:07:22.7963452Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T22:07:22.8021365Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T22:07:22.8080223Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T22:07:22.8135216Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T22:07:22.8194753Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T22:07:22.8258280Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T22:07:22.8310312Z Entering 'third_party/kleidiai' 2025-08-14T22:07:22.8377078Z Entering 'third_party/mimalloc' 2025-08-14T22:07:22.8434526Z Entering 'third_party/nlohmann' 2025-08-14T22:07:22.8500222Z Entering 'third_party/onnx' 2025-08-14T22:07:22.8579309Z Entering 'third_party/onnx/third_party/pybind11' 2025-08-14T22:07:22.8639263Z Entering 'third_party/opentelemetry-cpp' 2025-08-14T22:07:22.8702087Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T22:07:22.8756540Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T22:07:22.8815459Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T22:07:22.8872742Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T22:07:22.8934907Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T22:07:22.8998916Z Entering 
'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T22:07:22.9062922Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T22:07:22.9123000Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T22:07:22.9187033Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T22:07:22.9247688Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T22:07:22.9326527Z Entering 'third_party/pocketfft' 2025-08-14T22:07:22.9395876Z Entering 'third_party/protobuf' 2025-08-14T22:07:22.9462815Z Entering 'third_party/protobuf/third_party/benchmark' 2025-08-14T22:07:22.9521632Z Entering 'third_party/protobuf/third_party/googletest' 2025-08-14T22:07:22.9585372Z Entering 'third_party/psimd' 2025-08-14T22:07:22.9641966Z Entering 'third_party/pthreadpool' 2025-08-14T22:07:22.9713558Z Entering 'third_party/pybind11' 2025-08-14T22:07:22.9772672Z Entering 'third_party/python-peachpy' 2025-08-14T22:07:22.9828923Z Entering 'third_party/sleef' 2025-08-14T22:07:22.9888818Z Entering 'third_party/tensorpipe' 2025-08-14T22:07:22.9970324Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-08-14T22:07:23.0023227Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-08-14T22:07:23.0087876Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-08-14T22:07:23.0137519Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T22:07:23.0206447Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T22:07:23.0282542Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader 2025-08-14T22:07:23.0304189Z http.https://github.com/.extraheader 2025-08-14T22:07:23.0312007Z [command]/usr/bin/git config --local --unset-all http.https://github.com/.extraheader 2025-08-14T22:07:23.0341347Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :" 2025-08-14T22:07:23.0685160Z Entering 'android/libs/fbjni' 2025-08-14T22:07:23.0726178Z http.https://github.com/.extraheader 2025-08-14T22:07:23.0759734Z Entering 'third_party/FP16' 2025-08-14T22:07:23.0799827Z http.https://github.com/.extraheader 2025-08-14T22:07:23.0844348Z Entering 'third_party/FXdiv' 2025-08-14T22:07:23.0898296Z http.https://github.com/.extraheader 2025-08-14T22:07:23.0932918Z Entering 'third_party/NNPACK' 2025-08-14T22:07:23.0985773Z http.https://github.com/.extraheader 2025-08-14T22:07:23.1028897Z Entering 'third_party/NVTX' 2025-08-14T22:07:23.1072906Z http.https://github.com/.extraheader 2025-08-14T22:07:23.1106329Z Entering 'third_party/VulkanMemoryAllocator' 2025-08-14T22:07:23.1145874Z http.https://github.com/.extraheader 2025-08-14T22:07:23.1179282Z Entering 'third_party/XNNPACK' 2025-08-14T22:07:23.1221442Z http.https://github.com/.extraheader 2025-08-14T22:07:23.1278238Z Entering 'third_party/aiter' 2025-08-14T22:07:23.1322917Z http.https://github.com/.extraheader 2025-08-14T22:07:23.1361919Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T22:07:23.1402984Z http.https://github.com/.extraheader 2025-08-14T22:07:23.1453456Z Entering 'third_party/benchmark' 2025-08-14T22:07:23.1494778Z http.https://github.com/.extraheader 2025-08-14T22:07:23.1540987Z Entering 'third_party/composable_kernel' 2025-08-14T22:07:23.1586513Z 
http.https://github.com/.extraheader 2025-08-14T22:07:23.1641712Z Entering 'third_party/cpp-httplib' 2025-08-14T22:07:23.1681343Z http.https://github.com/.extraheader 2025-08-14T22:07:23.1715185Z Entering 'third_party/cpuinfo' 2025-08-14T22:07:23.1754772Z http.https://github.com/.extraheader 2025-08-14T22:07:23.1789103Z Entering 'third_party/cudnn_frontend' 2025-08-14T22:07:23.1828440Z http.https://github.com/.extraheader 2025-08-14T22:07:23.1863347Z Entering 'third_party/cutlass' 2025-08-14T22:07:23.1906211Z http.https://github.com/.extraheader 2025-08-14T22:07:23.1950561Z Entering 'third_party/fbgemm' 2025-08-14T22:07:23.1991796Z http.https://github.com/.extraheader 2025-08-14T22:07:23.2037158Z Entering 'third_party/fbgemm/external/asmjit' 2025-08-14T22:07:23.2081439Z http.https://github.com/.extraheader 2025-08-14T22:07:23.2120019Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-08-14T22:07:23.2165157Z http.https://github.com/.extraheader 2025-08-14T22:07:23.2221836Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-08-14T22:07:23.2269899Z http.https://github.com/.extraheader 2025-08-14T22:07:23.2304925Z Entering 'third_party/fbgemm/external/cutlass' 2025-08-14T22:07:23.2339783Z http.https://github.com/.extraheader 2025-08-14T22:07:23.2382254Z Entering 'third_party/fbgemm/external/googletest' 2025-08-14T22:07:23.2425823Z http.https://github.com/.extraheader 2025-08-14T22:07:23.2466348Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-08-14T22:07:23.2499608Z http.https://github.com/.extraheader 2025-08-14T22:07:23.2541806Z Entering 'third_party/fbgemm/external/json' 2025-08-14T22:07:23.2580618Z http.https://github.com/.extraheader 2025-08-14T22:07:23.2617219Z Entering 'third_party/flash-attention' 2025-08-14T22:07:23.2671187Z http.https://github.com/.extraheader 2025-08-14T22:07:23.2703138Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T22:07:23.2741738Z http.https://github.com/.extraheader 2025-08-14T22:07:23.2779180Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-08-14T22:07:23.2830817Z http.https://github.com/.extraheader 2025-08-14T22:07:23.2876335Z Entering 'third_party/flatbuffers' 2025-08-14T22:07:23.2917673Z http.https://github.com/.extraheader 2025-08-14T22:07:23.2952484Z Entering 'third_party/fmt' 2025-08-14T22:07:23.2994789Z http.https://github.com/.extraheader 2025-08-14T22:07:23.3036982Z Entering 'third_party/gemmlowp/gemmlowp' 2025-08-14T22:07:23.3078490Z http.https://github.com/.extraheader 2025-08-14T22:07:23.3108312Z Entering 'third_party/gloo' 2025-08-14T22:07:23.3147000Z http.https://github.com/.extraheader 2025-08-14T22:07:23.3178709Z Entering 'third_party/googletest' 2025-08-14T22:07:23.3221623Z http.https://github.com/.extraheader 2025-08-14T22:07:23.3254830Z Entering 'third_party/ideep' 2025-08-14T22:07:23.3297311Z http.https://github.com/.extraheader 2025-08-14T22:07:23.3332017Z Entering 'third_party/ideep/mkl-dnn' 2025-08-14T22:07:23.3380775Z http.https://github.com/.extraheader 2025-08-14T22:07:23.3422088Z Entering 'third_party/ittapi' 2025-08-14T22:07:23.3464506Z http.https://github.com/.extraheader 2025-08-14T22:07:23.3502776Z Entering 'third_party/kineto' 2025-08-14T22:07:23.3548216Z http.https://github.com/.extraheader 2025-08-14T22:07:23.3589542Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T22:07:23.3628104Z http.https://github.com/.extraheader 2025-08-14T22:07:23.3672597Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 
2025-08-14T22:07:23.3715780Z http.https://github.com/.extraheader 2025-08-14T22:07:23.3760469Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T22:07:23.3807709Z http.https://github.com/.extraheader 2025-08-14T22:07:23.3853377Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T22:07:23.3888863Z http.https://github.com/.extraheader 2025-08-14T22:07:23.3933303Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T22:07:23.3973191Z http.https://github.com/.extraheader 2025-08-14T22:07:23.4019891Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T22:07:23.4059416Z http.https://github.com/.extraheader 2025-08-14T22:07:23.4105342Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T22:07:23.4150574Z http.https://github.com/.extraheader 2025-08-14T22:07:23.4190958Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T22:07:23.4237183Z http.https://github.com/.extraheader 2025-08-14T22:07:23.4270055Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T22:07:23.4313437Z http.https://github.com/.extraheader 2025-08-14T22:07:23.4360285Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T22:07:23.4401080Z http.https://github.com/.extraheader 2025-08-14T22:07:23.4444662Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T22:07:23.4486361Z http.https://github.com/.extraheader 2025-08-14T22:07:23.4525307Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T22:07:23.4567066Z http.https://github.com/.extraheader 2025-08-14T22:07:23.4610533Z Entering 'third_party/kleidiai' 2025-08-14T22:07:23.4653536Z http.https://github.com/.extraheader 2025-08-14T22:07:23.4687662Z Entering 'third_party/mimalloc' 2025-08-14T22:07:23.4728133Z http.https://github.com/.extraheader 2025-08-14T22:07:23.4763138Z Entering 'third_party/nlohmann' 2025-08-14T22:07:23.4813034Z http.https://github.com/.extraheader 2025-08-14T22:07:23.4848350Z Entering 'third_party/onnx' 2025-08-14T22:07:23.4887540Z http.https://github.com/.extraheader 2025-08-14T22:07:23.4962778Z Entering 'third_party/onnx/third_party/pybind11' 2025-08-14T22:07:23.4996503Z http.https://github.com/.extraheader 2025-08-14T22:07:23.5034109Z Entering 'third_party/opentelemetry-cpp' 2025-08-14T22:07:23.5080293Z http.https://github.com/.extraheader 2025-08-14T22:07:23.5121902Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T22:07:23.5156744Z http.https://github.com/.extraheader 2025-08-14T22:07:23.5198116Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T22:07:23.5236559Z http.https://github.com/.extraheader 2025-08-14T22:07:23.5271791Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T22:07:23.5309763Z http.https://github.com/.extraheader 2025-08-14T22:07:23.5354368Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T22:07:23.5398075Z http.https://github.com/.extraheader 2025-08-14T22:07:23.5427410Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T22:07:23.5470243Z http.https://github.com/.extraheader 2025-08-14T22:07:23.5503787Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T22:07:23.5546750Z http.https://github.com/.extraheader 2025-08-14T22:07:23.5585073Z Entering 
'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T22:07:23.5630233Z http.https://github.com/.extraheader 2025-08-14T22:07:23.5675247Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T22:07:23.5729170Z http.https://github.com/.extraheader 2025-08-14T22:07:23.5763270Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T22:07:23.5805369Z http.https://github.com/.extraheader 2025-08-14T22:07:23.5847320Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T22:07:23.5886578Z http.https://github.com/.extraheader 2025-08-14T22:07:23.5946787Z Entering 'third_party/pocketfft' 2025-08-14T22:07:23.5990409Z http.https://github.com/.extraheader 2025-08-14T22:07:23.6024434Z Entering 'third_party/protobuf' 2025-08-14T22:07:23.6064641Z http.https://github.com/.extraheader 2025-08-14T22:07:23.6099103Z Entering 'third_party/protobuf/third_party/benchmark' 2025-08-14T22:07:23.6141556Z http.https://github.com/.extraheader 2025-08-14T22:07:23.6184953Z Entering 'third_party/protobuf/third_party/googletest' 2025-08-14T22:07:23.6227932Z http.https://github.com/.extraheader 2025-08-14T22:07:23.6270128Z Entering 'third_party/psimd' 2025-08-14T22:07:23.6325299Z http.https://github.com/.extraheader 2025-08-14T22:07:23.6357068Z Entering 'third_party/pthreadpool' 2025-08-14T22:07:23.6397828Z http.https://github.com/.extraheader 2025-08-14T22:07:23.6442744Z Entering 'third_party/pybind11' 2025-08-14T22:07:23.6495259Z http.https://github.com/.extraheader 2025-08-14T22:07:23.6533390Z Entering 'third_party/python-peachpy' 2025-08-14T22:07:23.6582071Z http.https://github.com/.extraheader 2025-08-14T22:07:23.6613837Z Entering 'third_party/sleef' 2025-08-14T22:07:23.6655085Z http.https://github.com/.extraheader 2025-08-14T22:07:23.6699657Z Entering 'third_party/tensorpipe' 2025-08-14T22:07:23.6746280Z http.https://github.com/.extraheader 2025-08-14T22:07:23.6778520Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-08-14T22:07:23.6821130Z http.https://github.com/.extraheader 2025-08-14T22:07:23.6851510Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-08-14T22:07:23.6889762Z http.https://github.com/.extraheader 2025-08-14T22:07:23.6925181Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-08-14T22:07:23.6977837Z http.https://github.com/.extraheader 2025-08-14T22:07:23.7007678Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T22:07:23.7051544Z http.https://github.com/.extraheader 2025-08-14T22:07:23.7089934Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T22:07:23.7126956Z http.https://github.com/.extraheader 2025-08-14T22:07:23.7257392Z A job completed hook has been configured by the self-hosted runner administrator 2025-08-14T22:07:23.7275189Z ##[group]Run '/home/ec2-user/runner-scripts/after_job.sh' 2025-08-14T22:07:23.7279785Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:07:23.7280073Z ##[endgroup] 2025-08-14T22:07:23.7387368Z [!ALERT!] Swap in detected! [!ALERT!] 2025-08-14T22:07:35.4110269Z [!ALERT!] Swap out detected [!ALERT!] 2025-08-14T22:07:55.0511837Z Cleaning up orphan processes
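[Editor's note] The utilization-upload step earlier in this log converts usage_log.txt into one metadata document plus 406 time-series documents and writes them to the ossci-utilization bucket under keys like util_metadata/v_1.5/pytorch/pytorch/16976338999/1/48128301923/metadata. The snippet below is a minimal illustrative sketch of that kind of write using boto3 (which the job installs via the retry step above). It is not the actual tools.stats.upload_utilization_stats implementation from pytorch/pytorch: the helper name upload_documents, the newline-delimited JSON serialization, and the lack of compression are all assumptions made for illustration.

    # Illustrative sketch only, not the real upload tool.
    # Assumes newline-delimited JSON with no compression (serialization details
    # of the actual tool are not shown in this log), and that AWS credentials
    # with write access to the bucket are available in the environment.
    import json
    import boto3  # installed earlier in this job by the retry step

    def upload_documents(bucket: str, key: str, documents: list[dict]) -> None:
        """Serialize documents as newline-delimited JSON and put them at s3://bucket/key."""
        body = "\n".join(json.dumps(doc) for doc in documents).encode("utf-8")
        boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=body)

    if __name__ == "__main__":
        # Fields mirror the metadata record peeked in the log above.
        metadata_doc = {
            "created_at": 1755209219,
            "type": "utilization",
            "repo": "pytorch/pytorch",
            "workflow_id": 16976338999,
            "run_attempt": 1,
            "job_id": 48128301923,
            "workflow_name": "inductor-periodic",
        }
        upload_documents(
            bucket="ossci-utilization",
            key="util_metadata/v_1.5/pytorch/pytorch/16976338999/1/48128301923/metadata",
            documents=[metadata_doc],
        )

In the run above, the metadata write reports 1 document and the time-series write reports 406 documents to the corresponding util_timeseries key before the teardown steps stop the container and prune the Docker cache.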